An extremely simplified optics laboratory for teaching the fundamentals of Fourier analysis

In this paper, we describe easy and cheap optics experiences for teaching undergraduate students the fundamental properties of Fourier transforms on an experimental basis. By exploiting the eye as the Fourier transforming device, a common magnifying lens, and quasi-coherent light from a small white LED, quantitative results can be easily obtained about the fundamental theorems used in Fourier analysis. The concept of coherence is also introduced in an elementary way. This approach has been successfully adopted to teach third-year students in physics, who operated in a completely autonomous way during the COVID-19 lockdown, without access to a laboratory. This proves the effectiveness of the method. Thanks to its experimental simplicity, it can easily be extended to teach and show the fundamental effects to a larger audience, including high-school students.


Introduction
The fundamentals of Fourier analysis are of the utmost importance for undergraduate students. Examples are ubiquitous, and Fourier analysis is commonly used to address physical and mathematical concepts in many fields, as evidenced, for example, in [1]. Learning from practical applications, as is possible in the laboratory, represents a further advantage, complementing the more traditional and formal approaches. More than ten years of teaching Fourier analysis in a dedicated optics laboratory has provided strong evidence of this advantage. The COVID-19 lockdown in 2020 immediately demanded a complete revision of the experimental approaches, which were usually based on a well-equipped optics laboratory. This implied an extreme simplification of the student experience, eventually enabling students to operate independently with minimal, very cheap instrumentation. The present work stems from the experience of students on the third-year laboratory course for the master's degree in physics. Despite the important limitations of this experimental approach, it represents an example of how an experiment can be simplified, giving everyone the chance to perform quantitative observations with results similar to those usually obtained in the laboratory. The experience is sufficiently simple as to be suitable for secondary school students as well, even if a completely quantitative approach might be better understood by undergraduate students.
In the following, the basic elements of Fourier analysis and a few fundamentals of optics are first introduced, fixing the formalism adopted across the entire work. The formalism is closely aligned with Goodman's book [2], which teachers and students can refer to as a guide for the arguments treated within this work, as well as for further insights into a number of issues, exercises and interesting applications in many fields. A brief introduction is given to the concept of spatial coherence and the proper way to generate light suitable for the experiences. The naked eye is then introduced as the basic Fourier transforming 'device' to measure the Fraunhofer diffraction patterns of some example cases. The basic theorems of Fourier analysis are then reproduced and quantitatively discussed through suitable experiences performed with common objects and materials. A colored photographic filter can be used for better quantitative analysis. After that, with the aid of a simple magnifying lens, the integral Fourier theorem is reproduced, with the naturally consequent applications to the spatial filtering of wavefronts, the imaging of phase objects, and a discussion of the Abbe-Porter experiments. By introducing specific objects, easily obtained with traditional photographic film, additional experiences can be performed with better quantitative analysis. The perspectives of further experiences and applications are also discussed.

The fundamentals of Fourier analysis
Linearity is a property shared by a number of physical phenomena. In short, the advantage of linear systems is the ability to obtain the response of a system to a given stimulus as the linear combination of the responses to 'elementary' stimuli. Fourier analysis is a fundamental tool for dealing with linear systems, and optics represents an ideal platform for illustrating its basic properties, hence the name 'Fourier optics'. Two main classes of systems can be distinguished, depending on whether or not the light is endowed with a property known as spatial coherence (see below). In the former case, light is described by wavefronts represented as complex-valued functions; in the latter, real-valued intensity distributions are used. Here, we focus on introducing the fundamentals of the Fourier analysis of linear systems with complex-valued functions. The real-valued case can be obtained as a special case, and can be found in many textbooks ([2], for example). Optical systems, specifically diffractive optics, naturally need to adopt two-dimensional (2D) Fourier analysis. This comes from the general description of wavefronts as complex-valued functions of two independent variables, i.e., they are defined in a 2D domain.
Provided that the necessary existence conditions are guaranteed, a given function g(x, y) can be decomposed into its Fourier components, 'elementary' functions of the form exp[i2π(f_X x + f_Y y)]. Here, i is the imaginary unit and f_X and f_Y are the spatial frequencies along the coordinate directions x and y. Such functions are simple plane waves, with the peculiar property of keeping their spatial frequencies constant across the (x, y) plane. Across this plane, the lines of constant phase follow the equation:

f_X x + f_Y y = m,

where m is an integer. The spacing is given by d = 1/f_0, where f_0^2 = f_X^2 + f_Y^2, f_0 being the spatial frequency of the plane wave, with 1/f_0 > λ. The elementary function introduced above can be interpreted as the section in the (x, y) plane of a three-dimensional plane wave. The plane wave has wavefronts perpendicular to a direction forming angles α, β, γ with the three coordinate axes. These angles are defined by the three directional cosines

cos α = λ f_X,  cos β = λ f_Y,  cos γ = [1 − (λ f_X)^2 − (λ f_Y)^2]^{1/2}.

The complex amplitude of the plane wave in the whole space (x, y, z) is completely fixed by the elementary function in the (x, y) plane. Finally, it can easily be shown that the plane wave propagates through space forming an angle ϑ with the optical axis, z, according to the relation sin ϑ = λ f_0 (ϑ is complementary to the angle formed with the (x, y) plane, which is that defining the directional cosines). For small diffraction angles, this expression can be approximated as ϑ ≈ λ f_0, which will be useful below.
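The relation sin ϑ = λ f_0 and its small-angle approximation are easy to cross-check numerically. A minimal sketch, with an assumed example spacing of d = 200 μm (5 lines/mm):

```python
import math

def diffraction_angle(wavelength, f0):
    """Propagation angle of the plane-wave component with spatial frequency f0:
    sin(theta) = wavelength * f0 (valid while wavelength * f0 < 1)."""
    return math.asin(wavelength * f0)

lam = 0.5e-6          # 0.5 um, as in the experiences below
f0 = 5e3              # 5 lines/mm, i.e. an assumed spacing d = 1/f0 = 200 um
theta = diffraction_angle(lam, f0)
# In this small-angle regime, theta is very close to lambda * f0
```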
We attract the reader's attention to the following basic, complementary observations.
(a) A (thin) grating imposing a sinusoidal amplitude modulation with a spatial frequency f_0 will generate only two symmetrically diffracted plane waves (see [2]), propagating at angles ±ϑ and fulfilling the elementary grating relation:

(1/f_0) sin ϑ = λ.

Therefore, we conclude that sin ϑ = λ f_0, which is the same expression obtained above for the plane wave. More precisely, if f_0 has components f_X and f_Y, then the diffracted plane waves will have spatial frequencies ±f_X and ±f_Y. The peculiarity of this grating lies in the existence of a single diffracted order, at variance with common gratings, which produce several or many plane waves upon diffraction. This is strictly related to the elementary function introduced above as the basic Fourier component: the sinusoidal grating represents the essence of diffraction, being simultaneously the simplest function for decomposing any amplitude modulation and the modulation generating a single diffracted order. Any amplitude modulation A(x, y) can be viewed as the superposition of sinusoidal gratings: the amplitude and phase values, Ã(f_X, f_Y), give the Fourier transform of A(x, y), and the superposition of the corresponding diffracted fields gives the diffraction pattern of the whole modulation itself.

(b) If the plane wave mentioned above is superimposed onto a unit-amplitude plane wave (reference wave) propagating along the optical axis z, exp[ikz], and they interfere on a screen perpendicular to z, the interference term will produce the intensity modulation described by:

I(x, y) ∝ 1 + cos[2π(f_X x + f_Y y)].

The interference pattern will be the 'footprint' of the plane wave on the screen.
This is also called the hologram of the plane wave: if illuminated by a unit-amplitude plane wave identical to the reference wave, it will generate a copy of the incoming plane wave propagating at an angle ϑ such that sin ϑ = λ f_0, plus a 'new' plane wave diffracted at the symmetric angle, −ϑ. Holography simply means 'complete drawing' in Greek, the name reflecting its ability to 'draw' the amplitude and phase of the incoming waves, although an uncertainty remains about the sign of the phase.
The above is what is needed to figure out the basics of Fourier analysis through elementary optics experiences. In the following, we will focus on the ability of simple lenses to produce Fourier transforms, and exploit this to reproduce the fundamental theorems. To introduce the formalism and language, we first recall the basic properties of lenses. For simplicity, we will refer here to Goodman's book [2], from which we follow the formal approach that is particularly clear in introducing Fourier analysis.

Fourier transforming properties of lenses
A thin lens is considered here as the simplest optical element imposing a spherical shape on a plane wave that passes through it. Such a lens in the plane (ξ, η) will impose a phase modulation:

t_l(ξ, η) = exp[−i (k/2f) (ξ^2 + η^2)],

where f is the focal length and k = 2π/λ. Let there be an 'object' imposing an amplitude and phase modulation described by the complex transmission function T(ξ, η). Just beyond the object, the wavefront will be described by the same function. Using the Huygens-Fresnel integrals and neglecting phase terms, in the focal plane (x, y) of the lens we obtain:

U(x, y) ∝ ∫∫ T(ξ, η) exp[−i (2π/λf) (xξ + yη)] dξ dη.

The positions (x, y) in the focal plane identify angles ϑ_X = x/f, ϑ_Y = y/f, which can be converted into spatial frequencies as introduced above: f_X = x/λf, f_Y = y/λf. By substituting these into the previous expression:

U(x, y) ∝ ∫∫ T(ξ, η) exp[−i2π(f_X ξ + f_Y η)] dξ dη.

Apart from phase factors, this is the Fourier transform of the function T(ξ, η). The intensity distribution at the focal plane will only provide the power spectral density of T(ξ, η), that is, the squared modulus of its Fourier transform. We stress that this result shows the plane wave decomposition of the transmission function, but it has been rigorously obtained on the basis of the Huygens-Fresnel approach, which stems from the description of wavefronts using spherical elementary waves. We can also cast things as a very simple consequence of basic geometric optics. In fact, the fundamental lens equation relates the positions of the conjugate planes at distances p and q from a lens of focal length f (see figure 1):

1/p + 1/q = 1/f.

At the focal plane, i.e. for q = f, the corresponding conjugate is set at infinity. This means that for an image at the focal plane, the object is solely described by angular positions. Therefore, a given position (x, y) in the focal plane will be illuminated by the intensity of a plane wave entering the lens with the angles ϑ_X and ϑ_Y defined above, corresponding to the spatial frequencies (f_X, f_Y): this gives the power spectrum of the wavefront.
A further insight into the Fourier spectrum can be gained using a simple lens. A wavefront described by the function T(ξ, η), as above, is collected by a lens, and light propagates downstream to a plane (u, v). Both planes are at a distance f from the lens, which is placed at the plane (x, y), z = 0. By exploiting the Huygens-Fresnel integrals and plane wave propagation again, it can easily be shown that the field complex amplitude in the plane (u, v) is the exact Fourier transform of the field complex amplitude in the plane (ξ, η). In symbols:

U(u, v) ∝ ∫∫ T(ξ, η) exp[−i (2π/λf) (uξ + vη)] dξ dη.

As above, the coordinates (u, v) play the role of spatial frequencies: f_X = u/λf, f_Y = v/λf. The formal proof of this fundamental result can be found, for example, in [2]. We will return to this peculiar property of lenses below, when discussing the integral Fourier theorem.
We stress that in all imaging systems operating with the object at infinity, the intensity profile of the image is nothing more than the power spectral density of the wavefronts entering the objective. We base our experiences upon this very simple result.
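This statement can be checked numerically with a discrete Fourier transform. The sketch below, under assumed example values (a 1D double slit with 100 μm slits spaced 1 mm apart in a 10 mm window), computes the power spectrum |F[T]|^2 and verifies that the Young's fringe maxima fall at multiples of 1/s in spatial frequency:

```python
import numpy as np

# 1-D transmission function: double slit of width w, center-to-center spacing s
N, L = 4096, 10e-3                         # samples, 10 mm physical window (assumed)
x = (np.arange(N) - N // 2) * (L / N)
w, s = 0.1e-3, 1.0e-3
T = ((np.abs(x - s / 2) < w / 2) | (np.abs(x + s / 2) < w / 2)).astype(float)

# Focal-plane field ~ Fourier transform of T; the eye (or a camera) records |U|^2
U = np.fft.fftshift(np.fft.fft(np.fft.fftshift(T)))
I = np.abs(U) ** 2
fX = np.fft.fftshift(np.fft.fftfreq(N, d=L / N))   # spatial frequency axis (1/m)

# First fringe maximum at positive frequency: expected at fX = 1/s
first_max = fX[np.argmax(np.where(fX > 0.5 / s, I, 0.0))]
```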

Spatial coherence
When using a lens as a Fourier transformer, it is necessary to transilluminate the object with a 'perfect' plane wave, as assumed in the theoretical description above. Light emitted by common sources, including cheap laser pointers, does not meet this criterion. Nevertheless, a simple but non-trivial argument, based on the concept of spatial coherence, shows how this requirement can be met in practice. The concept is briefly recapitulated here to introduce the experimental requirements for performing simple experiments with common white-light sources. The basic elements of coherence can be found, for example, in [1,3,4], while more detailed approaches are given in [5,6]. A remarkable application of spatial coherence arguments has been demonstrated by the Michelson stellar interferometer [7,8].
Coherence can be interpreted, or just defined, as the ability of a wave to form (stable) interference fringes. Stable interference fringes can only be obtained with a stationary relative phase. Two distinct concepts can be distinguished, spatial coherence and temporal coherence, depending on whether the correlations are considered between two points in space (at a fixed time) or in time (at the same position). The advent of lasers and, more recently, of extremely cheap laser sources has made temporally coherent light readily available for many elementary optics experiences. Nevertheless, most of these experiences were originally performed with natural light, the actual need being spatial, rather than temporal, coherence.
Let us consider a radiation source that extends over a distance D. Radiation passes through a couple of pinholes, at a distance d from one another and at a distance z from the source (see figure 2(a)). A screen is placed at a distance z′ from the pinholes, to observe the superposition of the emerging fields. Let z′ be large enough to allow the radiation diffracted from the pinholes to superimpose. Each point P of the source generates a field that, upon diffraction by both pinholes, forms Young's interference fringes on the screen. The positions of the interference maxima and minima depend on the relative phases of the diffracted waves, i.e. ultimately on the relative positions of the source emitter and the two pinholes. In the case of an ideal point-like source, Young's fringes are always visible. Otherwise, whether the interference fringes will sum together or cancel out is a non-trivial question to be addressed. From a formal point of view, we define the mutual intensity of the field at positions r_1, r_2 as [1,5]:

Γ(r_1, r_2) = ⟨U(r_1) U*(r_2)⟩,

where ⟨·⟩ indicates time averaging over a time much longer than the wave period. By introducing the local intensities I_1 = ⟨|U(r_1)|^2⟩ and I_2 = ⟨|U(r_2)|^2⟩, the quantity γ = Γ/√(I_1 I_2) provides the fringe visibility, or contrast [6].
The minimum distance between the pinholes such that the relative phases of the emerging waves change by 2π rad is known as the transverse coherence length. This is also the limiting condition for forming stable interference fringes on the screen at z′, so that we can redefine the coherence length as the minimum distance between the pinholes such that the visibility of the interference fringes vanishes. In other terms, the coherence length can be interpreted as the minimum spacing of the interference fringes formed at the pinhole plane by the superposition of any couple of point-like emitters at the source. Provided that the inter-pinhole distance is smaller than the spacing of the interference fringes formed by the wavefronts W_1 and W_2 from the two furthest emitters, at positions x_1 and x_2 (figure 2(b)), the two pinholes are illuminated by a field with a fixed relative phase. The overall phase can also change with time, even very fast, depending on the emission processes, but the relative phase of the fields at the two apertures is always fixed, and so is the position of the interference fringes at the observation screen. The spacing between the interference fringes formed at the pinholes by waves from x_1 and x_2 is simply given by:

Λ = λ/θ,    (9)

where θ is the angle subtended by the points x_1 and x_2, which is also the angular aperture of the source observed from the pinhole position. Equation (9) comes from a very fundamental theorem, independently obtained by van Cittert and Zernike and named after both. It provides the field-field, two-point correlation function of the field at a distance z from a source of linear extension D as the Fourier transform of the source transverse intensity profile. We can conclude that the illumination of the pinholes can occur under two opposing conditions [2,9]:

d < Λ: fringes are present at the observation screen. The system is linear in the (complex) fields.

d > Λ: the observation screen is uniformly illuminated. The system is linear in intensity.
Equation (9) allows us to set up the illumination conditions to achieve coherent illumination over a sufficiently large transverse area. Let us exploit a common LED illuminator, from a mobile phone for example, about D = 2 mm in size and with a wavelength of approximately λ = 0.5 μm. At a distance L = 5 m, the transverse coherence length is approximately 1.25 mm. This can be considered as the minimum for the experiences described below; a larger distance would be better. Alternatively, by placing a diaphragm 1 mm in diameter over the source, at L = 5 m we double the coherence length, as if we were twice as far away. In the following, we will assume a coherence length of at least 2 mm, specifically obtained in the examples reported below with a 2 mm LED at L = 10 m, if not otherwise specified.
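The van Cittert-Zernike estimate Λ = λ/θ = λL/D used above is easy to tabulate. A minimal sketch with the source sizes and distances quoted in the text:

```python
def coherence_length(wavelength, distance, source_size):
    """Transverse coherence length from equation (9): Lambda = lambda / theta,
    with theta = D / L the angular aperture of the source."""
    return wavelength * distance / source_size

lam = 0.5e-6
print(coherence_length(lam, 5.0, 2e-3))    # 2 mm LED at 5 m  -> 1.25 mm
print(coherence_length(lam, 10.0, 2e-3))   # 2 mm LED at 10 m -> 2.5 mm
print(coherence_length(lam, 5.0, 1e-3))    # 1 mm diaphragm at 5 m -> 2.5 mm
```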

Fourier transforming with the naked eye
The most common and available optical system is the human eye. It is composed of a lens forming images onto a screen (the retina) that contains the light-sensitive cells. In what follows, the crystalline lens will be considered as a thin lens, with a typical focal length of f = 17-18 mm. It follows that, by transilluminating an object with a spatially coherent wave and looking towards the source with the eye adapted to infinity, the retina will be illuminated by a light intensity distribution that is the power spectral density, or power spectrum, of the wavefront transmitted by the object.
Quantitatively speaking, let the wave illuminating the object be denoted by U_0. It has uniform amplitude and phase over a plane of coordinates (ξ, η), orthogonal to the propagation direction and placed at a distance z < 0 from the lens. An object transilluminated by this wavefront introduces amplitude and phase modulations described by the complex function T(ξ, η). The emerging field will then be:

U(ξ, η) = U_0 T(ξ, η).

To measure the corresponding power spectrum, we simply need to look through the object towards the source, with the eye adapted to infinity. The intensity distribution at the retina will then be given by:

I(x, y) ∝ |T̃(x/λf, y/λf)|^2,

where T̃ = F[T], f is the focal length of the crystalline lens, and f_X = x/λf, f_Y = y/λf are the spatial frequencies of the plane waves into which the wavefront can be expanded. By substituting the relations for the spatial frequencies in terms of the propagation angles of the plane waves with respect to the optical axis z, we have the linear positions on the retina:

x = f ϑ_X = λ f f_X,  y = f ϑ_Y = λ f f_Y

(under the paraxial approximation). This result is useful for making quantitative considerations from the observed power spectra. An easy 'calibration' enables the observer to perform quantitative measurements in terms of spatial frequencies. By looking through the object with one eye, the other will be available to look at the source and its surroundings. Two advantages are achieved: (a) both eyes will be adapted to infinity, so that the Fourier transforming eye will operate properly; (b) a scale placed in the source surroundings (all the better if it is orthogonal to the line of sight) will allow the measurement of the subtended angles ϑ_X and ϑ_Y, and therefore the spatial frequencies f_X and f_Y. We can thus perform quantitative measurements of Fraunhofer diffraction patterns with the naked eye.
It is useful to fix some numbers for the eye and the corresponding limits when viewing objects in terms of spatial frequencies. The angular resolution of the eye is commonly about ϑ_min ∼ 1 arcmin ∼ 3 × 10⁻⁴ rad. With a focal length f = 17 mm, this corresponds to a transverse distance on the retina x = f ϑ_min ∼ 5 μm, or a spatial frequency of f_0 = ϑ_min/λ ∼ 0.6 mm⁻¹ for a wavelength λ = 0.5 μm. A grating with such a spatial frequency will give diffraction maxima at the smallest angular distance perceivable by the eye. The accessible range of spatial frequencies is sufficiently extended to allow for experiences with common materials and objects to be Fourier transformed by the naked eye.
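These numbers follow from the two relations x = f ϑ and ϑ = λ f_0; a minimal sketch with the values quoted above:

```python
theta_min = 3e-4        # ~1 arcmin angular resolution of the eye, in rad
f_eye = 17e-3           # focal length of the crystalline lens, in m
lam = 0.5e-6            # wavelength, in m

x_retina = f_eye * theta_min        # smallest resolvable spot on the retina (~5 um)
f0_limit = theta_min / lam          # grating frequency diffracting at theta_min, 1/m
```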

Fraunhofer diffraction by apertures and obstructions
Examples are provided here as suggestions for work that uses the Fourier transforming properties of the eye. In this section, we will focus on the measurement and characterization of the power spectrum. First, let us consider a small aperture, a pinhole with a diameter of a few hundred μm. It can easily be obtained by means of a sharp pin gently pressed onto common thin aluminum foil, preferably placed on a rigid plane. By looking through the hole illuminated by the LED source as detailed above, the corresponding Airy disk will appear, with a number of satellite rings [2,3,6]. The angular position ϑ_1 of the first minimum (zero) is related to the aperture's diameter D by:

sin ϑ_1 = 1.22 λ/D,    (13)

so that D can be precisely measured according to the calibration described above. An example of a diffraction pattern from a calibrated circular aperture 300 μm in diameter is shown in figure 3(a), here obtained through a digital camera instead of the eye. We stress that the pictures are just reported as a rough example of the results, the vision of the naked eye giving much better insight into the actual diffraction patterns. We recall that the positions of the minima in the diffraction pattern of a circular aperture with diameter D = 2a are given by the zeroes of the first-order Bessel function J_1. A different example is represented by an obstruction, as easily obtained by studying the diffraction from a hair. In this case a linear diffraction pattern is obtained (see figure 3(b)), extended in a direction perpendicular to the hair. Note that in this case the 1.22 prefactor in equation (13) is missing [2,3,10], since it comes from the Fourier transform of the circular aperture. For the position of the first minima at a distance L = 10 m, we find values ranging from 50 to 90 mm from the center, depending on the hair's diameter. Note that it is convenient to measure the distance of the null intensity. For example, a thin hair gives 85 mm, corresponding to an angle ϑ_1 = 8.5 × 10⁻³ rad and a diameter of approximately D = 60 μm.
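The hair measurement reduces to two divisions; a minimal sketch with the numbers quoted above:

```python
lam, L = 0.5e-6, 10.0       # wavelength and source distance, as in the text
x1 = 85e-3                  # measured first-minimum distance on the scale, in m

theta1 = x1 / L             # subtended angle: 8.5e-3 rad
D_hair = lam / theta1       # obstruction: first zero at sin(theta) = lambda / D
# D_hair comes out close to 60 um, as quoted in the text
```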
Looking through the aperture, the observer will notice the chromatic dispersion associated with the diffraction of white light. This is well described by equation (13), where the angle and wavelength are directly proportional. Therefore, a precise estimate of the minima positions could be an issue, since each wavelength forms its minima at different angles in the diffraction pattern. Although slightly more demanding, a better result can be obtained by filtering the light from the white source with a proper bandpass filter. We note that a narrow band is not very convenient, since it will dramatically reduce the light power, making observation even more difficult. By contrast, a common photographic filter transmitting one color band will reduce the chromatic dispersion effects sufficiently. For the best choice of filter, the emission spectrum of the white LED should be considered: the common peak at around 450 nm, given by the blue die of the device, suggests that a blue filter should be better than a green one, where the spectrum is at a minimum. Alternatively, a red filter can be used, although a much wider spectral bandwidth is introduced by the yellow and red phosphors. These spectral features also explain why, in the diffraction patterns of white LED light, blue and red appear almost separate. Chromatic dispersion, as well as the advantages of operating with a filter, will also apply to the experiences described below.
Young's experiment is a special case of Fraunhofer diffraction, as clearly discussed in [10]. Originally exploited as a proof of the wave nature of light, a very important extension was introduced by Wolf, who exploited the same scheme to measure spatial coherence [6,11]. In figure 4(a) the result of a typical Young's experiment is shown, with apertures made from two pinholes approximately 300 μm in diameter, spaced 1 mm apart, obtained using a needle on common thin aluminum foil. In figure 4(b), a red filter is placed in front of the LED source. In figures 4(c) and (d), a similar experience is repeated at two distances from the light source, L = 8 m (c) and L = 4 m (d), respectively, thus making the transverse coherence length change from larger to slightly smaller than the pinhole spacing. The ultimate meaning of coherence is evidenced here, the fringes losing their visibility when coherence is reduced (d). By systematically measuring the visibility of the fringes as a function of the pinhole spacing, a measurement of the degree of coherence at a given distance from the source is possible. The widespread realm of coherence stems from this apparently simple observation, as is made clear by reference to the works of Wolf himself [11], or to the introduction to the Michelson stellar interferometer [7].
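The visibility measurement mentioned here is just the Michelson contrast of the recorded fringes. A minimal sketch on synthetic fringes, with an assumed degree of coherence γ = 0.6 (the recovered visibility equals γ):

```python
import numpy as np

def visibility(I):
    """Michelson fringe visibility (contrast) of an intensity trace."""
    return (I.max() - I.min()) / (I.max() + I.min())

# Synthetic Young's fringes for a partially coherent source (assumed gamma = 0.6)
x = np.linspace(0.0, 10.0, 2001)          # screen coordinate, arbitrary units
gamma = 0.6
I = 1.0 + gamma * np.cos(2 * np.pi * x)
```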

Thin sinusoidal amplitude grating
Although it might be considered a bit strange, a grating imposing a sinusoidal amplitude profile onto the transilluminating wavefront actually represents the ultimate grating: any transmission function can be thought of as the sum of an appropriate number of these gratings, according to Fourier's theorem, which is why we discuss this ideal transmission function. Despite its apparent peculiarity, it can be obtained quite easily with some trials using several suitable samples. A grating imposing a sinusoidal amplitude modulation across the plane wavefront generates the 0th-order and the two first-order maxima only [2]. Thus, it is quite easy to recognize it just from its power spectrum, which is the Fraunhofer diffraction pattern. For example, such a grating can likely be well approximated by a simple fabric (organza or gauze), where the warp and weft act as two perpendicular gratings. Different fabrics can be tested, looking for the cases where the number of diffraction maxima is minimum, ideally one. In such a case, we have evidence that the system behaves as required. The material's depth changes across each thread, thus introducing an oscillating, smooth wavefront modulation that mimics a sinusoidal grating. Figure 5(a) shows the Fraunhofer diffraction pattern from a common thin white towel. The Fraunhofer diffraction from an (almost) ideal sinusoidal grating is shown in figure 5(b), as obtained with the same setup as above. Realizing good sinusoidal gratings is not too difficult when taking pictures, with an old-fashioned film camera, of a properly printed pattern. By means of computer-generated pictures, a convenient greytone profile can be obtained on film to impose sinusoidal modulations onto a plane wave. Taking pictures under bright illumination and perpendicular to the mask will produce suitable gratings on the film, with a groove distance that can be simply imposed by changing the camera-to-picture distance.
Care should be taken to use suitable exposure times when taking pictures. In [2], the realization of computer-generated holograms with photopolymers is extensively described.
Note that a comparison can be made with the most common Ronchi ruling, or Ronchi mask. This is a peculiar, widely spaced grating (with a period of the order of 100 μm) characterized by apertures and stops of the same size, with sharp edges. A detailed discussion of the Ronchi ruling and its applications in optics can be found in [12] (in Italian). When illuminated by a plane wave, it generates a number of diffraction orders, due to the large spacing and because the sharp borders introduce Fourier components even at high spatial frequencies. In figure 5(c) the Fraunhofer diffraction pattern of a Ronchi ruling is shown. Here, the chromatic dispersion, increasing with the order number, is much more evident than in the previous case. Finally, in figure 5(d) the same measurement is shown with red filtering, as already done in figure 4.
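The contrast between the two power spectra can be reproduced numerically. This sketch, under assumed parameters (spacing d = 200 μm over a 10 mm window), counts how many diffraction orders carry an appreciable fraction of the peak power for a sinusoidal grating and for a sharp-edged, Ronchi-like ruling:

```python
import numpy as np

N, window = 4096, 10e-3
x = np.arange(N) / N * window
d = 200e-6                                  # groove spacing (assumed)
f0 = 1.0 / d                                # 50 periods fit exactly in the window

sinusoidal = 0.5 * (1 + np.cos(2 * np.pi * f0 * x))       # smooth modulation
ronchi = (np.cos(2 * np.pi * f0 * x) > 0).astype(float)   # sharp-edged ruling

def strong_orders(T, thresh=1e-2):
    """Number of positive-frequency components above `thresh` of the peak power."""
    P = np.abs(np.fft.rfft(T)) ** 2
    P[0] = 0.0                               # discard the undiffracted 0th order
    return int(np.count_nonzero(P > thresh * P.max()))
```

The sinusoidal grating yields a single strong order per side, while the sharp edges of the Ronchi-like ruling populate several odd harmonics.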
For the sake of experimental simplicity, in the following sections, we consider the fabric that produces one order of diffraction as the benchmark for gratings in almost all the experiments described.
The angular position of the diffracted orders provides a measurement of the groove spacing. A difference arises between the two perpendicular directions due to the different spacing between the warp and weft and, in some cases, different thread diameters. The information about all these features is contained in the power spectrum observed by the eye, and quantitative measurements can be made. We focus on the measurement of the groove spacing, d, from the grating law:

d sin ϑ_n = nλ.

From the diffraction pattern formed by the towel (figure 5(a)), the linear distance between the two second-order diffraction maxima is about 90 mm at L = 10 m. Therefore, we obtain:

sin ϑ_2 ≈ ϑ_2 = 45 mm / 10 m = 4.5 × 10⁻³ rad.

Since n = 2, so that nλ ∼ 1 μm, we obtain d ≈ 220 μm for the groove spacing. This value can also be verified with an independent measurement in the direct space, with a magnifying lens and a ruler. We stress that the difference between the sinusoidal and Ronchi cases represents genuine evidence of Fourier transform fundamentals. For a fixed spacing, a single diffraction order corresponds to a smooth, sinusoidal transmission function. By contrast, a huge number of diffraction orders is the fingerprint of the sharp borders of the Ronchi ruling's apertures and stops.
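The arithmetic of this measurement is worth packaging as a short check, using the values quoted in the text:

```python
import math

lam, L, n = 0.5e-6, 10.0, 2
separation = 90e-3                # distance between the two 2nd-order maxima, in m

theta_n = (separation / 2) / L    # half-separation over distance: 4.5e-3 rad
d = n * lam / math.sin(theta_n)   # grating law: d sin(theta_n) = n lambda
# d comes out close to the 220 um quoted in the text
```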

Basic Fourier transform theorems
The naked eye, when exploited as a device for the Fourier transform of transmission functions, allows the realization of simple experiences that reproduce the basic Fourier analysis theorems. Since we always access the power spectrum of the transmission function, some issues have to be considered when interpreting diffraction patterns, as we point out below.

Similarity theorem
Let G(f_X, f_Y) = F[g(x, y)]; then:

F[g(ax, by)] = (1/|ab|) G(f_X/a, f_Y/b).

Let us focus on the diffraction from the fabric, fixing the positions of the diffraction maxima.
If we now slightly stretch the fabric in one direction, it elongates, but it also shrinks in the perpendicular direction. The diffraction pattern shows the opposite behavior: the maxima contract in the direction of the fabric's elongation, while their distance increases in the perpendicular direction. In terms of the theorem above, stretching the coordinates (x, y) in the direct space causes a contraction of the coordinates in the frequency domain. A change in the amplitude of the Fourier components also takes place (but is almost invisible, in our experience). Generally speaking, the similarity theorem ultimately explains the complementary role of Fourier analysis in either the direct or the Fourier space. In brief, phenomena that are too small, or too fast, to be studied in the direct space can easily be characterized in the Fourier domain, and vice versa. This is a simple example of the relations among conjugate variables in waves, which has its counterpart in Heisenberg's uncertainty principle in quantum mechanics [1,4].
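The reciprocal scaling can be verified with a discrete transform. A minimal sketch with a Gaussian profile of rms width σ (a stand-in for a smooth transmission feature): doubling σ halves the rms width of the power spectrum:

```python
import numpy as np

N, L = 8192, 100.0
x = (np.arange(N) - N // 2) * (L / N)
f = np.fft.fftshift(np.fft.fftfreq(N, d=L / N))

def spectral_width(sigma):
    """rms width of the power spectrum of a Gaussian of rms width sigma."""
    g = np.exp(-x**2 / (2 * sigma**2))
    P = np.abs(np.fft.fftshift(np.fft.fft(g))) ** 2
    return float(np.sqrt(np.sum(f**2 * P) / np.sum(P)))

# Stretching x by a factor of 2 contracts the spectrum by the same factor
ratio = spectral_width(1.0) / spectral_width(2.0)
```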

Linearity theorem
Let G_i(f_X, f_Y) = F[g_i(x, y)], i = 1, 2; then, for any constants α and β:

F[α g_1 + β g_2] = α G_1 + β G_2.    (17)

Despite its essential simplicity, the realization of this theorem requires some attention. We need a transmission function that is the sum of two given functions. It is common for students to consider two hairs, or even the fabric itself, as good candidates for this experience, due to the apparently clear superposition of two or more obstructions. It must be noticed that this superposition is not actually a sum, but a product. By contrast, two separate apertures (e.g. slits) are also suitable for this purpose; notice that if they superimpose, again, the resulting function is a product. Unfortunately, producing two or more separate apertures that are small enough and close enough to realize the experience is quite difficult (except for the case of the pinholes in Young's experiment discussed above). To overcome this difficulty, we invoke Babinet's principle, which states that the fields diffracted by an aperture and by the complementary obstacle have identical amplitudes and opposite phases. Therefore, we can safely operate with two separate obstructions, thus generating the same fields that would have been generated by two apertures. The two fields sum and form the corresponding power spectrum. We illuminate two hairs, preferably of different diameters, preferably not crossing, not parallel, and close enough to fall within one coherence area. In figure 6(a) we report the result of such a measurement. The hairs can easily be measured independently, and the corresponding diffraction patterns are easily recognized, thus evidencing the linearity theorem. Two comments are worth making here. First, we actually realize the sum of two fields with the 'wrong' object, which represents the product of two functions. Since we are interested in summing the fields, we can still operate correctly, thanks to Babinet's principle. Clearly, the interpretation is crucial here, and the product case must be considered properly, as will be discussed for the next experience.
The second point is that we access the power spectrum, that is, the squared modulus of the sum of the two fields. Therefore, cross-terms arise that give origin to additional, much fainter maxima (invisible in figure 6(a)). The case of parallel hairs, or slits, better evidences these terms, being a revisited version of Young's experiment. Again, provided that a proper description is used to interpret the data, the non-trivial effect of the interference pattern becomes an evident result of the linearity theorem.
Figure 7. (a) Diffraction pattern obtained with two identical sinusoidal amplitude gratings 5 mm apart along the optical axis. In (b), the phase difference between the two gratings is such that the diffracted beams on one side (on the right of the picture) have opposite phases and cancel out by destructive interference.
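The field addition and the resulting cross-terms can be illustrated with a short numerical sketch; the slit widths and positions below are illustrative assumptions, standing in (via Babinet's principle) for the two hairs:

```python
import numpy as np

# Numerical sketch of the linearity theorem for two separated slits.
N = 8192
x = np.linspace(-5, 5, N)

def slit(x0, w):
    """Transmission function of a slit of width w centred at x0."""
    return (np.abs(x - x0) < w / 2).astype(float)

g, h = slit(-0.5, 0.05), slit(+0.5, 0.05)
G = np.fft.fftshift(np.fft.fft(g))
H = np.fft.fftshift(np.fft.fft(h))
S = np.fft.fftshift(np.fft.fft(g + h))

# The fields add linearly: S = G + H ...
assert np.allclose(S, G + H)
# ... but the power spectrum is not the sum of the two power spectra:
# the cross term 2*Re(G*conj(H)) produces the Young-type fringes.
cross = np.abs(S) ** 2 - np.abs(G) ** 2 - np.abs(H) ** 2
print(np.max(np.abs(cross)))  # large: interference terms are present
```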

Convolution theorem-resolution
Let G(f_X, f_Y) = F[g(x, y)] and H(f_X, f_Y) = F[h(x, y)]. Then F[g(x, y) h(x, y)] = G(f_X, f_Y) ∗ H(f_X, f_Y), that is, the convolution of the two transforms. This theorem has several applications, as we will briefly discuss below. A number of combinations of transmission functions will give a clear realization of this theorem. We just introduce a couple of cases that could also be interesting in view of more general applications. The simplest realization of this theorem is given by something we already discussed above: the diffraction from a circular pinhole, for example. In its extreme simplicity, it shows that the incoming plane wave multiplied by the aperture function will generate a Fraunhofer diffraction pattern given by the squared modulus of the convolution between the two Fourier transforms: the Dirac delta and the Bessel function, respectively.
The previous example might appear too simplistic, as the convolution by a Dirac delta is ultimately a product. A slightly more interesting realization can be obtained by superimposing a grating and an aperture. For example, an aperture can be used as a diaphragm in front of the tissue, or the Ronchi ruling. The aperture must be properly sized to let at least some threads, or grooves, be illuminated and diffract the incoming light. In this case, each narrow spot formed by the grating will be enlarged to a size given by the power spectrum of the aperture itself. Again, both power spectra can be analyzed separately and compared with the power spectrum of their combination. Figure 6(b) shows the result obtained with a Ronchi ruling and the small diaphragm adopted for obtaining the pictures shown in figure 3.
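For readers who wish to check the theorem numerically, the following sketch verifies the discrete form of the convolution theorem for a grating multiplied by an aperture (the grating period and diaphragm size are illustrative assumptions):

```python
import numpy as np

# Numerical sketch of the convolution theorem: the DFT of a pointwise
# product equals the circular convolution of the two DFTs divided by N.
N = 1024
x = np.arange(N)
grating = 0.5 * (1 + np.cos(2 * np.pi * 32 * x / N))   # sinusoidal grating
aperture = (np.abs(x - N // 2) < 100).astype(float)    # diaphragm

lhs = np.fft.fft(grating * aperture)
G, H = np.fft.fft(grating), np.fft.fft(aperture)
# circular convolution of the two spectra, computed with one more FFT pair
rhs = np.fft.ifft(np.fft.fft(G) * np.fft.fft(H)) / N
print(np.allclose(lhs, rhs, atol=1e-6))
```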
As mentioned above, the transmission function imposed by the fabric is the product of two independent functions imposed by the two perpendicular gratings represented by the warp and the weft. The corresponding Fraunhofer diffraction pattern will then be given by the squared modulus of the convolution between the two Fourier transforms. One can easily prove that this will give a diffraction pattern composed of the pattern of one grating, for example the warp, extended in one direction, repeated in the perpendicular direction as many times as the Fourier transform of the weft has diffraction orders: the convolution is then realized. This is identical to the case of two gratings summed (for example, by displacing them to separate positions on the plane), thus explaining the ambiguity mentioned above.
Besides the basic interpretation in terms of convolution, the results obtained so far also introduce a fundamental property of any optical system: the concept of resolution. Without entering into details about this topic, we will simply note that by introducing a diaphragm, the diffraction pattern is affected by diffraction due to the aperture itself. This limits the capability of the system to distinguish or resolve close maxima. Because the Fraunhofer diffraction pattern can be considered as one of the basic schemes for imaging, we understand how optical systems are limited in resolution by their geometrical apertures. The Rayleigh criterion, for example, states that the ultimate limit for distinguishing two point-like sources is ideally reached when the first diffraction minimum of the image of one source coincides with the central maximum of the other. It is an important practical estimate, but a more detailed analysis is needed, as the noise occurring in the system constitutes the ultimate limitation on its resolving power. Finally, it is important to note that it is not straightforward to extend the result obtained above to the loss in resolution caused by looking through a diaphragm with the naked eye: in this case the wavefront passing through is not spatially coherent, and the image formation is not given by the squared modulus of the convolution, as above. We refer the reader to textbooks for a description of this difference.
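As a simple worked example of the Rayleigh criterion, the minimum resolvable angle for a circular aperture is theta = 1.22 lambda/D; the wavelength and pupil diameter below are assumed, order-of-magnitude values, not measurements from this work:

```python
# Order-of-magnitude estimate of the Rayleigh angular resolution of the
# naked eye (illustrative, assumed values).
lam = 550e-9          # wavelength of green light, in metres (assumed)
D = 3.0e-3            # pupil diameter, in metres (assumed)
theta = 1.22 * lam / D
print(theta)          # about 2.2e-4 rad, i.e. roughly 0.8 arcminutes
```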

Shift theorem
If G(f_X, f_Y) = F[g(x, y)], then F[g(x − a, y − b)] = G(f_X, f_Y) exp[−i2π(f_X a + f_Y b)]. That is, a linear translation is described by a phase shift in the Fourier space, the phase shift of a given Fourier component being proportional to the spatial frequency. This is almost obvious by thinking about the phase shift introduced into the argument of an oscillating function to translate it by a given distance. The realization of this theorem can be obtained in two ways. The former is by simply translating any aperture or object in front of the naked eye and looking at its power spectrum. As already noted above, it will be unchanged. Now, by noticing that the transmission function changes upon translation, one immediately concludes that only the phases can change, since the amplitudes are maintained. Nevertheless, this can be unsatisfactory, or incomplete, since the phase changes cannot be measured.
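The claim that translation preserves the amplitudes and changes only the phases, linearly in frequency, is easy to verify numerically; the object profile below is an arbitrary illustrative choice:

```python
import numpy as np

# Numerical sketch of the shift theorem: translating the object leaves
# the power spectrum unchanged and adds a phase linear in frequency.
N = 512
x = np.arange(N)
g = np.exp(-((x - 200) / 10.0) ** 2)     # arbitrary aperture function
shift = 25
g_shifted = np.roll(g, shift)            # translate by 25 samples

G, Gs = np.fft.fft(g), np.fft.fft(g_shifted)
# amplitudes identical ...
print(np.allclose(np.abs(G), np.abs(Gs)))
# ... phases differ by exp(-2*pi*i*k*shift/N), linear in frequency k
k = np.arange(N)
ramp = np.exp(-2j * np.pi * k * shift / N)
print(np.allclose(Gs, G * ramp))
```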
The latter realization needs some care and is perhaps the most difficult experience described here. The realization can be experienced using simple components, as above, but requires very careful preparation and usually some amount of trial and error. Ultimately, since we are going to measure phases, an interferometric measurement is necessary. In order to avoid a very difficult procedure for students, it is much better for them to work with two identical Ronchi rulings or at least two custom-made sinusoidal gratings that have 20-30 lines/mm. Let us place the two identical gratings along the optical path, separated by a distance of approximately 5-10 mm. Both generate a 0th order and two first-order symmetrically diffracted waves, the latter grating being illuminated by the plane wave transmitted by the former. Provided that the gratings are identical, the diffracted beams superimpose in the same directions. They are two identical plane waves with similar amplitudes, so they will interfere depending on their relative phase. This phase difference is imposed by (i) the grating distance, and (ii) the relative transverse position of the gratings, according to the shift theorem itself.
Therefore, a condition will exist when the wave diffracted by the former will have the opposite phase with respect to the latter, and they will cancel out. It is straightforward to find that the same condition will leave the waves on the opposite side with the same phase, and constructive interference will be present. As a result, one will observe an asymmetric Fraunhofer diffraction pattern with one maximum instead of two symmetric maxima.
To reach this condition, the experimenter will have to very carefully move one grating transversely to the optical axis. By doing this, maxima will disappear alternately on opposite sides. If a micrometric stage is adopted to translate the grating, the measurement of the transverse distance between two positions where one maximum disappears will provide the grating spacing itself. This is in quantitative accordance with the shift theorem. Figure 7 shows an example of this condition obtained with two identical sinusoidal gratings. In (a), both maxima are present; in (b) the condition is achieved whereby one maximum is present, but the opposite maximum is cancelled by destructive interference due to the relative phase.
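A toy model of this experience reproduces the alternating cancellation; it assumes equal amplitudes in the two interfering first orders and an arbitrary fixed phase from the axial gap (both simplifying assumptions), and shows that successive cancellations of the same maximum are one grating pitch apart:

```python
import numpy as np

# Toy model: translating the second grating by delta adds phases
# +/- 2*pi*delta/d to its +/-1 orders, so each lateral maximum cancels
# once per grating period. Measuring the translation between two
# successive cancellations therefore yields the pitch d itself.
d = 1 / 25e3            # pitch of a 25 lines/mm grating, in metres
phi0 = 1.2              # fixed phase from the axial gap (assumed value)

def I_plus(delta):
    """Intensity of the +1 maximum versus transverse shift delta."""
    return np.abs(1 + np.exp(1j * (phi0 + 2 * np.pi * delta / d))) ** 2

# transverse position where the +1 maximum first cancels
delta0 = d * (np.pi - phi0) / (2 * np.pi)
print(I_plus(delta0))          # ~0: destructive interference
print(I_plus(delta0 + d))      # ~0 again: spacing equals the pitch
print(I_plus(delta0 + d / 2))  # ~4: constructive, maximum restored
```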
The effect evidenced by this experience goes well beyond a simple realization of the shift theorem. We have obtained an asymmetric diffraction pattern, in clear contrast to any other diffraction pattern encountered so far. This is related to another fundamental property of the Fourier transform: if either the imaginary or the real part of the input functions is zero, the Fourier transform is strictly symmetric. Any 2D amplitude modulation, such as those considered here, can be considered as a real-valued function, provided it is properly illuminated by a plane wave. On the contrary, in the case of two gratings displaced along the optical axis, this is not always true and the Fourier transform might be asymmetric. This is the essence of the additional condition introduced to Bragg diffraction for three-dimensional gratings [2,3].

The 4f system and the Fourier integral theorem
A remarkable step forward is possible by introducing an optical scheme of huge importance and utility in many applications of coherent optics. This is the so-called 4f system, schematically depicted in figure 8, slightly adapted to our case: the first lens, with a focal length f_1, performs a first Fourier transform on the wavefront; the second, with a focal length f_2 (here, the eye), collects the Fourier spectrum and performs another Fourier transform. Ideally, the 4f system has two equal lenses spaced by twice their focal length: it collects the wavefront from the front focal plane of the first lens and double transforms it at the back focal plane of the latter, hence the name. Following equation (7), at the back focal plane of the latter lens we then obtain a wavefront U(u, v) ∝ U(ξ = −u, η = −v), where U(ξ, η) is the wavefront at the entry focal plane of the system and U(u, v) that at the exit focal plane. By considering this result in terms of fields, we notice that the double transformation has an immediate interpretation. Let G(f_X, f_Y) = F[g(x, y)]; then it follows that F[G(f_X, f_Y)] = F{F[g(x, y)]} = g(−x, −y). Thus, the double transformation of the field gives the field itself, with inverted coordinates. If, instead of Fourier transforming the spectrum G(f_X, f_Y), it is inversely transformed, the function g(x, y) will be obtained, according to the Fourier integral theorem. This also draws attention to the strict similarity that exists between the Fourier transform and the inverse transform; they are ultimately the same operator.
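The double-transformation property can be checked directly; the sketch below verifies the discrete counterpart of the identity above, F{F[g]}[n] = N g[−n mod N], on an arbitrary input:

```python
import numpy as np

# Numerical sketch of the 4f double transformation: applying the forward
# Fourier transform twice returns the input with inverted coordinates.
N = 256
rng = np.random.default_rng(0)
g = rng.standard_normal(N)               # arbitrary 'wavefront'

double = np.fft.fft(np.fft.fft(g)) / N   # two successive transforms
flipped = g[(-np.arange(N)) % N]         # g with inverted coordinates
print(np.allclose(double, flipped))
```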
Here, it is worth commenting on the case whereby the two focal lengths are different, as they will most likely be in our system. The similarity theorem then plays its role: a simple scaling exists between the spatial coordinates in the two fields. In short, the coordinates are scaled by the ratio of the focal lengths.
As a consequence, we recognize the 4f system, or its evolution with different focal lengths, as a realization of this fundamental theorem of Fourier analysis. In our experience, the latter lens is represented by the eye's crystalline lens, with the retina placed at the corresponding back focal plane (u, v).
We need to introduce an additional lens to perform the first Fourier transform of the field. To this aim, we exploit a simple magnifying lens, also known as a loupe. Any positive lens will work, of course, provided that its focal length is not much larger than that of the eye. The mount typically supplied with these lenses helps the experimenter, as will be clarified below.
First, we need to fix the front and back focal planes. By simply imaging an object that is distant enough (ideally at infinity) and well illuminated onto thin white paper, we immediately find both. In figure 9(a), a magnifying lens is shown producing images at both focal planes. Let us then build two tubes, preferably black inside to reduce spurious reflections (stray light). The lengths of these tubes should be such that they end exactly at the front and back focal planes. We can finally check this by imaging objects at infinity once more. In figure 9(b), we show an example of the mounted system. Now we are ready to realize the 4f configuration. With the light source at a convenient distance, as above, we put an object in the front focal plane and look into the system with the eye just downstream of the back focal plane; the eye will perform the second Fourier transform. With the eye accommodated at infinity, we will see the intensity distribution as transmitted by the object. Figure 9(c) shows an example of this result for the fabric mentioned above. A replica of the object will be seen, according to the Fourier integral theorem.
Further experiences can be performed using this device with various objects. Accurate observation will prove that the reconstructed field and the corresponding intensity distribution meet the scaling condition mentioned above, according to the ratio of the focal lengths.

Spatial filtering
The 4f system introduces opportunities to operate on wavefronts. Accessing the Fourier spectrum in the intermediate focal plane (f_X, f_Y) allows us to introduce so-called spatial filtering, thus opening the way to a wide variety of experiences. Here, we just focus on a couple of applications that are worth discussing in a basic Fourier optics course. Many other applications can be found and realized, as briefly discussed below.
In principle, operating on the Fourier spectrum of a wavefront can be done in many ways, by introducing amplitude modulations, phase modulations, or both in the intermediate focal plane. For the sake of experimental accessibility, here we focus on amplitude modulations only, which are relatively easy to implement with our system. Therefore, we can introduce high-pass, low-pass, bandpass, and single-sideband filters in the 2D spectrum. Low-pass filtering can easily be obtained by introducing a small diaphragm in the intermediate focal plane. High-pass filtering can easily be obtained by introducing a thin wire: notice that this also removes the transmitted beam, thus realizing a configuration similar to that adopted in dark-field imaging, often used in microscopy. We refer to specific texts for a description of this application. We report here an example of single-sideband filtering, adopted to visualize phase objects, as used, for example, in Schlieren photography. In figure 10(a), we show the 4f intensity distribution obtained by transilluminating a thin plastic film, to be compared with (b), where a knife edge has been introduced vertically in the intermediate focal plane to remove half of the Fourier spectrum. Phase modulations invisible in (a) become evident. In (c) and (d), the same experience is repeated with the insertion of a photographic red filter. Notice that characterizing the intensity distributions obtained in such experiences requires a properly calibrated digital camera. An example of camera calibration is given in [13].
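Low-pass filtering lends itself to a minimal numerical sketch; the grating frequency and diaphragm radius below are illustrative assumptions, not our actual components. A diaphragm passing only the 0th order removes the fringes and leaves a nearly uniform field:

```python
import numpy as np

# Sketch of amplitude-only spatial filtering in the intermediate focal
# plane: a small diaphragm (low-pass) suppresses the grating fringes.
N = 256
y, x = np.mgrid[0:N, 0:N]
obj = 0.5 * (1 + np.cos(2 * np.pi * 16 * x / N))    # vertical grating

F = np.fft.fftshift(np.fft.fft2(obj))
fy, fx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
r = np.hypot(fx, fy)

lowpass = F * (r < 8)           # diaphragm: passes only the 0th order
filtered = np.abs(np.fft.ifft2(np.fft.ifftshift(lowpass)))
print(filtered.std())           # ~0: fringes removed, uniform field
```

A thin central stop, `F * (r > 8)`, would instead realize the high-pass (dark-field) case mentioned above.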

The Abbe-Porter experiment revisited
One of the most insightful applications of spatial filtering, and Fourier optics in general, is the Abbe-Porter experiment, which very clearly elucidates the Abbe theory of coherent image formation. Actually, we cannot realize this experiment in its original form, but a revised version based on pure spatial filtering is easily performed with our 4f system. Basically, Abbe's theory describes the formation of an image by a lens through a specific interpretation of the fields in its back focal plane. Considering the wavefront entering the lens in terms of its Fourier decomposition into plane waves immediately shows that the back focal plane is characterized by point-like sources, one for each plane wave, according to the Fourier transform. The plane waves can be interpreted as the beams diffracted by sinusoidal amplitude gratings at the object plane (see above). The plane wave propagating along the optical axis is formed by the superposition of the corresponding 0th-order diffracted waves from all the gratings: it focuses on the optical axis. On the image plane, the spherical waves emerging from these sources interfere with the transmitted wave, forming sinusoidal fringes: the image is then formed by the superposition of simple interference patterns, as the object has been assumed to be decomposed in terms of sinusoidal gratings. In our 4f configuration, the scheme is fundamentally different: the field diffracted by the object is transformed twice. Therefore, we cannot realize the original Abbe-Porter experiment. Nevertheless, the spatial filter in the intermediate focal plane evidences the Fourier decomposition into plane waves and allows us to operate in the Fourier space, as was done in the original Abbe-Porter experiment.
The Abbe-Porter experiment can, for example, be realized as described below. A 2D grating is imaged, and a thin slit placed in the back focal plane of the lens, parallel to one direction of the grating grooves, operates as a 1D low-pass filter. Only the central portion of the 2D spectrum is allowed to propagate further, producing interference with the transmitted field. The interference fringes then degenerate into continuous lines along the direction perpendicular to the slit. We can introduce similar filtering by placing the slit in the intermediate focal plane of the 4f system, that is, in front of our eye. The image of a 2D grating formed on the retina will then be composed of continuous lines perpendicular to the slit direction. The image is clearly missing the Fourier components needed to form the image of the grating in the other direction. In figure 11, the results of this experience are shown: in (a), the 2D grating is reproduced through the 4f system; in (b), the wavefront from the same grating is passed through a slit placed in the intermediate focal plane. Again, in (c) and (d), we operate with red light. In (e) and (f), the same experience is repeated with the 2D grating rotated around the direction of propagation: still, only horizontal lines are present, according to the slit position, irrespective of the grating orientation.
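The effect can be previewed numerically; in this sketch (grating frequencies and slit width are illustrative assumptions) a slit passing a single column of diffraction orders leaves an image that varies only in the perpendicular direction:

```python
import numpy as np

# Sketch of the revisited Abbe-Porter experiment: a crossed 2D grating
# filtered by a slit in the Fourier plane that passes only the fx ~ 0
# column of diffraction orders, leaving a 1D grating in the image.
N = 256
y, x = np.mgrid[0:N, 0:N]
obj = (0.5 * (1 + np.cos(2 * np.pi * 16 * x / N)) *
       0.5 * (1 + np.cos(2 * np.pi * 16 * y / N)))   # crossed gratings

F = np.fft.fftshift(np.fft.fft2(obj))
fy, fx = np.mgrid[-N // 2:N // 2, -N // 2:N // 2]
slit = np.abs(fx) < 4                    # vertical slit in Fourier plane
img = np.abs(np.fft.ifft2(np.fft.ifftshift(F * slit)))

# all variation along x is gone: only horizontal lines remain
print(np.allclose(img, img[:, :1]))      # each row is constant
```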
The Abbe-Porter experiment is only one among a number of applications of spatial filtering that constitute the realm of optical information processing: other examples are Zernike's phase-contrast method, Schlieren imaging (just mentioned above), Marechal's photography applications, character recognition through the Vander Lugt filter, etc. Developed throughout the twentieth century, these methods have been re-examined during the development of computer-based image processing, and are widely used in a broad class of software and applications that work on digital pictures.
Figure 11. Revisitation of the Abbe-Porter experiment. In (a), the wavefront from a 2D grating is passed through the 4f system; in (b), it is filtered by blocking all diffraction orders in the horizontal direction. In (c) and (d), the same is repeated with a red filter. In (e), the 2D grating is rotated, and in (f) the same horizontal filtering is operated, thus forming horizontal lines that are not in the object.

Conclusions
Approaching Fourier analysis through simple experiences is of huge importance for undergraduate students, who will exploit these methods in a wide range of applications, within theory, experiment, and numerical analysis. An example of numerical computation can be found in [14], where a number of numerical experiments can easily be performed. Here, I have introduced some very simple experiences that can be proposed to a class of students. The experiences can be performed by individual students, who will practice preparing an experiment and observing the relevant elements of a relatively complex resulting effect. Despite the extreme simplicity of the 'apparatus', accurate preparation allows students to enter into specific applications of the theorems. Moreover, the experiences described in this work can easily be done without access to any lab, even at home with common materials and objects. Improving the same experiences in an equipped lab might be a very interesting approach for further practice with laboratory equipment, but it is not essential. Last but not least, the experiments suggested here introduce no hazards at all, even when exploiting the naked eye, making it really safe to let students operate alone. This also makes them appropriate for use by high-school students, although the formal approach might be suitably reduced.
Using an LED light allows students to enter into the details of coherence, a matter of remarkable importance in physics. This is also interestingly related to Fourier analysis, exhibiting the complementary realms of spatial and temporal coherence. These arguments can also be adapted to lab experiences for graduate students.