Overview 4: Photography D: Exposure

The Choices

  1. Flow phenomenon: Water boiling? Faucet dripping? Why does it look like that?
  2. Visualization technique: Add dye? See light distorted by an air/water surface?
  3. Lighting: Continuous? Strobe? Sheet?
  4. Photography
    A: Framing and Composition
    B: Cameras
    C: Lenses
    D: Exposure
    E: Resolution
  5. Post-processing: Creating the final output. Editing means at least cropping the image and setting contrast.

Now we’ll see how aperture and shutter speed combine to control how much light hits the sensor, and how ISO controls what the sensor does with all those photons to make an image. Let’s start with the end goal, and work backwards to the choices we have to make about camera settings to achieve that goal.

When we take a picture, we want to record a scene and get information on details in both the highlights (bright areas in the scene) and shadows (dark areas), if possible. To start, a goal might be for an average brightness level in the scene to turn into an average brightness in your image. Aperture, shutter and ISO are three basic camera settings we are able to control to achieve this goal, and there are a huge number of combinations of those settings that can record a given scene. Just like with focus, you can rely on your camera’s automatic choices, but when photographing flow vis, the full automatic settings are not likely to get you the best result. How to choose, then? Each of the settings will have a side effect in the image, and you will have to weigh the trade-offs between those side effects. For example, we’ve already seen how aperture controls depth of field (DOF) as well as the area for light to get through the lens and, thus, the exposure. We’ll look at the basics and side effects of shutter speed and ISO as well.

First, how much light is coming from the scene? All modern cameras have a TTL (through-the-lens) light meter of some sort. DSLRs have mechanical shutters protecting the sensor, so the meter will be on a light path split off from the optical viewfinder. Mirrorless and other LCD viewfinder cameras use the main sensor itself to measure light. Your camera can use this information in a dizzying array of programs or modes: it can average over the whole frame, it can use specific locations in the frame that you choose, it can use the whole frame but weight the value from the center more heavily in its calculation (center-weighted) and on and on. Depending on the camera, you may be able to choose the metering mode. The exact metering mode is probably not crucial because you can plan on making adjustments to the settings as you go along during the session – remember, you need to be shooting a lot, with varying settings! As a beginner, you may want to choose center-weighted to get started.

Then the camera will choose a combination of aperture, shutter speed and ISO, again depending on its AE (automatic exposure) programming. It may try to guess what your subject is and choose settings to give that subject a medium brightness in the final image while optimizing those trade-off side effects. This is where you will need to step in because in flow vis, the camera won’t be able to guess what you want. For now, you should choose “aperture priority” mode; in aperture priority, you choose the aperture size, and let the camera choose shutter speed and ISO.

Figure 1: Sequence of one stop increments in aperture. KoeppiK, CC BY-SA 4.0 via Wikimedia Commons.

Aperture and Exposure

On the last page, we saw that f/ goes with the inverse of the diameter for a given focal length. The f/ range is divided into increments, called ‘stops’, because in the early days of photography, before the invention of the variable iris, an aperture was just a hole drilled in a metal plate that was used to ‘stop’, or reduce, light passing through a lens. This is why f/ is pronounced “eff stop.” One stop, as a unit of measurement, is a factor of two in the amount of light. If you cut the area of the aperture in half, you have ‘stopped down’ by one stop, and only half the light is passing through. Since f/ goes with one over the diameter, not the area, there is a nonlinear relationship between f/ and stops, as shown in Figure 1. If you work in aperture priority much, you may want to memorize this common one-stop sequence: f/ 1.4, 2.0, 2.8, 4, 5.6, 8, 11, 16, 22. A more modern term for this ‘factor of 2 in light’ is EV, exposure value, and your camera may allow you to change settings in 1/2 or 1/3 EV increments.
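Since each stop halves the aperture area, the f-number grows by a factor of √2 per stop. A quick sketch in Python shows how closely the marked one-stop f-numbers track the exact powers of √2:

```python
import math

# One stop halves the light, so the aperture *area* halves and the
# f-number grows by a factor of sqrt(2) per stop.
exact = [math.sqrt(2) ** i for i in range(1, 10)]   # f/1.41 ... f/22.6
marked = [1.4, 2.0, 2.8, 4.0, 5.6, 8.0, 11.0, 16.0, 22.0]

for e, m in zip(exact, marked):
    # Marked f-numbers are rounded for convenience; all agree to a few percent.
    print(f"exact f/{e:5.2f}   marked f/{m}")
```

This is why the sequence looks irregular: the underlying rule is geometric, and the familiar numbers are just rounded versions of it.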

Shutter Speed

Shutter speed is how long the shutter is open. Like aperture, there is some mixed-up nomenclature specifying it. It’s usually given as a fraction of a second, with the fraction-of part being assumed. So if your camera tells you it is using a shutter speed of 30, it really means the shutter is open for 1/30 of a second. If it says a shutter speed of 30″, with a quotation mark or other symbol, it probably means the shutter is open for 30 seconds, but RTFM for your camera. At the slow (or long) end of the scale, you may be offered the option of T or B. T means ‘time’: the shutter will open when you press the shutter button, and then stay open until you press it again. B means ‘bulb,’ referring to an ancient pneumatic type of shutter release; the shutter would stay open as long as you squeezed the rubber bulb. Today, this means as long as the button stays pressed. Better yet, download the wifi app for your camera and control it from your laptop. That way your hand on the button won’t shake the camera. Note that you’ll probably want a tripod for any exposure longer than 1/30th of a second; all humans shake a little, causing your image to smear during a long exposure. You can get away with slower shutter times if you have image stabilization, but less so if you are using a lot of zoom. For flow vis done indoors, a lightweight $15 tripod will be sufficient.
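The interaction between zoom and camera shake follows a common handheld guideline, the ‘reciprocal rule’ (a rule of thumb, not something from the text or your manual). A rough sketch:

```python
def min_handheld_shutter(focal_length_mm, stabilization_stops=0):
    """Reciprocal rule of thumb: the slowest safe handheld shutter time is
    about 1/focal-length seconds (35 mm equivalent). Each stop of image
    stabilization roughly doubles the usable time. A guideline only."""
    return (1.0 / focal_length_mm) * 2 ** stabilization_stops

print(min_handheld_shutter(50))       # about 1/50 s for a normal lens
print(min_handheld_shutter(200, 3))   # long lens, 3 stops of stabilization
```

The longer the lens, the more any shake is magnified, which is why zooming in shrinks your handheld margin even with stabilization.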

A shutter prevents light from hitting your sensor except when you choose. This technology dates back to when a photosensitive film had to be kept in complete darkness except when being ‘exposed’ to light to form the image; hence the term ‘exposure’ for making an image. Mechanical shutters are still common in DSLRs, and even in mirrorless cameras, but are rare in video and phone cameras. Instead, an ‘electronic shutter’ (more or less) simply turns the sensor on and off. This is another area where technology is changing rapidly, leading to hybrid electronic-mechanical shutters, since each type of shutter has pros and cons. We’ll just cover some basics here.

Figure 2: A simulation showing the effect of a rolling shutter on a spinning disc. The jagged appearance is due to the small number of rows; the higher number of rows in a real camera results in smoother curves. cmglee, CC BY-SA 3.0, via Wikimedia

Mechanical shutters in DSLRs are located right in front of the sensor and are called ‘focal plane shutters’. They are made of two curtains that move vertically. The front, or first, curtain moves (up, down, or sideways, depending on the design), revealing the sensor. The second, or back, curtain follows in the same direction. If the exposure is relatively slow, the whole sensor is exposed before the second curtain starts moving. At shorter exposures, the second curtain starts moving before the first curtain has completed its motion. This defines a ‘rolling shutter’. You can get odd effects with moving subjects because the top of the image is exposed at a slightly different time than the bottom, as illustrated in Figure 2. This can be a real problem in flow vis; it is difficult to detect unless you know what the flow was supposed to look like.
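The size of the distortion in Figure 2 scales with how far the subject moves during the top-to-bottom readout. A back-of-the-envelope sketch, with made-up numbers:

```python
def rolling_shutter_skew_px(object_speed_px_per_s, frame_readout_s):
    """How far (in pixels) a moving feature shifts between the first and
    last row read out: the bottom of the frame 'sees' the scene later."""
    return object_speed_px_per_s * frame_readout_s

# Hypothetical numbers: a feature crossing the frame at 2000 px/s, with a
# 10 ms top-to-bottom readout, is skewed by 20 px between top and bottom.
print(rolling_shutter_skew_px(2000, 0.010))
```

If the skew is a pixel or less, you will never notice it; tens of pixels, and straight edges visibly lean, as in the spinning disc simulation.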

If you are using a flash for illumination, you want to trigger it while the whole sensor is exposed. The shortest time for this full exposure is the ‘synch speed’ for the camera. It might be 1/60th or 1/250th of a second; check your manual. Some flash units offer ‘high speed synch;’ this is basically a long flash that matches the duration of the high-speed rolling shutter, which may be good for portraiture, but you lose the time-freeze aspect of a short flash, so I don’t recommend it for flow vis. Mechanical shutters are limited in how fast they can open and close; 1/8000th of a second is near the top for prosumer DSLRs. Other drawbacks are that mechanical shutters cause vibration, make noise, take time to reset, and can bounce.

Electronic shutters are also limited: photons hit the sensor (usually a CMOS type), are converted into a charge, and the charge is then ‘read’, or measured, and the result is stored as digital information. There is a range of methods for doing this. Typical electronic shutters are rolling shutters; data is read line by line, a relatively slow process, although shutter speeds on the order of 1/70,000 seconds are claimed for some phone cameras. Depending on the motion and details of the readout, an assortment of artifacts can distort the final image. Flash can be tricky: phone cameras use LEDs to create a relatively long flash so that the whole image is illuminated. More expensive shutter designs, such as those in professional high-speed video cameras, use high-speed data buffers to capture the information quickly and then store the data more slowly. Global shutters, which capture the whole frame at once, are currently rare and expensive, but this will likely change in the near future.

Shutter speed in video is limited by the frame rate. Each frame can have an exposure time up to, but no longer than, the frame period (one over the frame rate). You can choose faster shutter speeds without changing the frame rate, but the result may feel ‘unreal.’ Give it a try! The recording frame rate can be more or less than the playback speed, which is usually kept at a comfortable 30 fps (frames per second). If you shoot at a higher frame rate than the playback rate, it’s ‘high-speed,’ or ‘slow-motion.’ If you shoot slower than the playback rate, it’s considered ‘time-lapse.’ Time-lapse videos of clouds are often shot at 3 to 5 seconds per frame, so the shutter speed can be quite long as well, allowing time-lapse of sunsets lasting into the night.
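The slow-motion or time-lapse effect is just the ratio of playback to capture frame rates; a quick sketch:

```python
def apparent_speed(capture_fps, playback_fps=30):
    """Apparent motion speed on playback: below 1 is slow motion,
    above 1 is time-lapse."""
    return playback_fps / capture_fps

print(apparent_speed(240))     # high-speed capture: motion at 1/8 speed
print(apparent_speed(1 / 4))   # one frame every 4 s: 120x time-lapse
```

Shooting clouds at one frame every 4 seconds and playing back at 30 fps compresses two hours of sky into one minute of video.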

Figure 3: Water thrown from a wet tennis ball forms spiral streams as each droplet starts out with the tangential velocity of the surface plus some radial velocity. Droplets that leave the same location on the sphere a bit later have rotated a bit further and start with a slightly rotated velocity vector, resulting in spiral streams. William Derryberry, Kristopher Tierney, Team Third Spring 2014.

Why do we care about shutter speed in flow vis? If the flow is moving while the shutter is open, the image will be smeared in the direction of the motion. This is called ‘motion blur.’ Sometimes we’ll want to use a fast shutter speed to freeze the flow, and sometimes we’ll want a slow shutter to intentionally blur the flow or show particle tracks. Figure 3 shows an in-between case; droplets close to the spinning ball are entirely blurred together, and slower-moving droplets further out have short particle paths. Once the particle track is on the order of a few pixels in length, it looks like it’s standing still. This image was made with a shutter speed of 1/1600 seconds.
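Whether the flow freezes or smears comes down to how many pixels it crosses while the shutter is open. A rough sketch with hypothetical numbers (the image scale here is assumed, not taken from Figure 3):

```python
def streak_length_px(flow_speed_m_per_s, shutter_s, image_scale_m_per_px):
    """Pixels of motion blur: distance the flow moves during the exposure,
    divided by the size of one pixel in the scene."""
    return flow_speed_m_per_s * shutter_s / image_scale_m_per_px

# Hypothetical numbers: a droplet at 5 m/s, a 1/1600 s shutter, and an
# image scale of 0.1 mm per pixel leave a streak about 31 px long.
print(streak_length_px(5.0, 1 / 1600, 1e-4))
```

A streak of a pixel or two reads as frozen; tens of pixels reads as a particle track; hundreds, and individual droplets blur together, as near the ball in Figure 3.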

Hands On! Do This Now!

What is the range of shutter speeds on all your cameras? What is your camera’s synch speed? Can you make a T or B exposure? If you have a DSLR, can you lock the mirror up for a long exposure?

Figure 4: Sensor sensitivity increases with ISO. JeanBizHertzberg via Wikimedia Commons.

ISO: Sensor Sensitivity

The third major exposure control is the sensitivity, or gain, of the sensor. First we need to talk about the anatomy of a sensor just a little. We’ll see this in more depth when we talk about resolution.

Camera sensors are really a packaged grid of tiny sensors: pixels, from picture element. For our purposes, we can think of a pixel as made up of a red, a green, and a blue sensor element, tightly bundled together, although the reality is more complex. When digitized, the light hitting each pixel is rendered as a set of three numbers, one each for the R, G, and B color channels. For now, assume that the range of numbers can go from 0 to 255; if the pixel value is (0,0,0) it is black; (255, 0, 0) is pure red; (255, 255, 255) is full white; and (127, 127, 127) is a medium gray.
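A minimal sketch of that pixel model in Python:

```python
# (R, G, B) triples with 8-bit channels, as in the text.
black = (0, 0, 0)
pure_red = (255, 0, 0)
white = (255, 255, 255)
medium_gray = (127, 127, 127)

def is_saturated(pixel, full_scale=255):
    """True if any channel has hit the top of its range (a clipped,
    'blown out' value that carries no detail)."""
    return any(channel >= full_scale for channel in pixel)

print(is_saturated(white), is_saturated(medium_gray))  # True False
```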

Each color channel in each pixel is a transducer of a sort; it converts light into a number. ISO is a measure of how sensitive the sensor is, like the gain in an amplifier circuit. Figure 4 shows an idealized mapping between light and the resulting number. A low ISO means the sensitivity is low; it takes a lot of light to get to the highest possible pixel values, and details in the shadows may be lost. When the sensitivity is doubled, going from ISO 100 to 200, the highest value is reached at half the light intensity, measured on a log scale. Light brighter than that shows up as saturated because the pixel is already at the maximum it can read. This overexposure is referred to as ‘blowing out the highlights;’ all light levels above the saturation point will show as pure white with no details. In flow vis terms, you have lost the information contained in the highlight details. You should generally avoid this when shooting; if you want to white-out the background, you can do it later in post-processing, but keep your options open by capturing the most information when shooting. Many cameras now include an overexposure indicator called a ‘zebra’ viewfinder mode, which shows blown-out areas as a pattern of stripes, so you can adjust to avoid this before making the exposure.
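The gain-and-clipping behavior can be sketched as an idealized linear sensor, in the spirit of Figure 4 (a toy model, not any real camera’s response curve):

```python
def pixel_value(light_level, iso, base_iso=100, full_scale=255):
    """Idealized linear sensor response: doubling ISO doubles the gain,
    and anything past full scale clips to pure white."""
    return min(round(light_level * iso / base_iso), full_scale)

print(pixel_value(100, 100))   # 100: a midtone
print(pixel_value(100, 200))   # 200: one stop brighter
print(pixel_value(100, 400))   # 255: clipped; highlight detail is lost
```

Note that once a value clips at 255, no amount of post-processing can recover what was there, which is exactly why blown highlights should be avoided in-camera.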

Figure 5: Noise clearly visible in an image from a digital camera. Lugiadoom at English Wikipedia, Public domain, via Wikimedia Commons.

Back in the days of film, sensitivity was called ASA because the standard was set by the American Standards Association, which evolved to become ANSI. Similarly, digital sensor sensitivity is set by ISO, the International Organization for Standardization, which unhelpfully numbers all of its standards. In the world of photography, however, ISO is taken to mean sensor gain. An increase of ISO by one stop, e.g., ISO 800 to ISO 1600, corresponds to roughly twice the sensitivity, so you only need to give the sensor half as much light to get the same pixel value. For the past few years, manufacturers have been making dramatic improvements in low-light sensitivity, allowing maximum ISO values of 50,000 in a good prosumer camera (at least this year, 2022). This makes shooting clouds at night entirely possible. For comparison, the human eye’s sensitivity is roughly equivalent to ISO 800.
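Since each stop is a factor of two in sensitivity, the number of stops between two ISO settings is a base-2 logarithm:

```python
import math

def stops_between(iso_low, iso_high):
    """Number of stops (EV) separating two ISO settings; each stop is a
    factor of two in sensitivity."""
    return math.log2(iso_high / iso_low)

print(stops_between(800, 1600))   # 1.0: one stop
print(stops_between(100, 51200))  # 9.0 stops (51200 is a common top setting)
```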

However, the trade-off to watch out for with high ISO values is noise in the image, as shown in Figure 5. A variety of issues in the sensor and circuitry combine to make noise an issue at high ISO. More expensive cameras tend to do a better job of mitigating these factors; that’s one of the things worth paying for.

Hands On! Do This Now!

How many megapixels do each of your cameras have? How many pixels wide and tall are your images? You might need to download an image to your computer and examine the file details to see. What is the ISO range for your sensor? How do you control the ISO? Take a test shot at your highest ISO and see how much noise there is.

Exposure Value

We saw earlier that a change of 1 in exposure value (EV) roughly means ‘a factor of two in light’. The formal definition actually refers to settings of ISO, shutter speed and aperture that combine to control the resulting pixel value. The total light reaching the sensor is proportional to the aperture area times the shutter time, so for a given scene and ISO, if you reduce the shutter time by a factor of two, say from 1/30 s to 1/60 s, and open the aperture one stop from f/5.6 to f/4, the EV and total light hitting the sensor will stay the same.
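The standard formal definition, at ISO 100, is EV = log₂(N²/t), where N is the f-number and t is the shutter time in seconds. A quick check of the trade described above:

```python
import math

def exposure_value(f_number, shutter_s):
    """Standard exposure value at ISO 100: EV = log2(N^2 / t)."""
    return math.log2(f_number ** 2 / shutter_s)

# Halving the shutter time while opening up one stop keeps EV constant,
# to within the rounding of the marked f-number (5.6 is really 5.66):
print(round(exposure_value(5.6, 1 / 30), 2))
print(round(exposure_value(4.0, 1 / 60), 2))
```

The two values agree to within a few hundredths of a stop; the small residual comes entirely from the conventional rounding of f/5.6.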

When making your visualizations you have a choice of shutter-aperture-ISO combinations that result in the same overall EV. How to choose? Consider the side effects: DOF vs. motion blur, motion blur vs. noise, for example. Which is more important in your image? This can help you choose your automatic exposure mode as well: aperture or time priority.

Balancing Aperture, Shutter Speed and ISO

Now we can put it all together, as shown in Figure 6.

Figure 6: Effects of exposure choices.


The effects and side effects are very clear when you are shooting in a manual (M) exposure mode, making controlled choices about each of the three factors. However, most of us shoot using a fully automatic or program exposure mode — where the camera makes all three choices — or in a semi-automatic mode, generally aperture priority (A or Av) or shutter priority (S or T). In these semi-automatic modes, the camera will choose to keep ISO as low as possible without slowing the shutter too much for hand-held use, probably no slower than 1/30th second. In contrast, in ‘sports’ program modes, the camera will choose short shutter times, with high ISO to compensate. The algorithms for these various modes may or may not be well described in your manual or online, so I prefer to stick with aperture or shutter priority.

Figure 7: Symbol for exposure compensation: deliberate over- or underexposure. JeanBizHertzberg, CC BY-SA 4.0 via Wikimedia Commons

Cameras give us one final method of exposure control in addition to these three basic factors. Sometimes called EV compensation, it allows you to deliberately underexpose or overexpose your image compared to what the light meter algorithm thinks will make the perfect exposure. It is probably accessed via an icon like Figure 7. You’ll be presented with a scale; moving towards the positive side will overexpose your image, making it brighter. Moving towards the negative will underexpose, making it darker. I typically leave my camera set to underexpose by 1/2 stop or so; it looks more natural to me and better avoids blown highlights.

Your camera may offer a ‘bracketing’ mode, where it quickly shoots a handful of images over a range of exposures, from under- to overexposed, to make sure at least one of them is what you want. There are then post-processing methods to use the best part of each image: this is ‘high dynamic range’ or HDR imaging. We’ll look at this more closely in the next section, on resolution.
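A bracket varied by shutter speed is easy to sketch (illustrative only; real cameras may vary aperture or ISO instead, depending on the mode):

```python
def bracket_shutters(base_shutter_s, ev_steps=(-2, -1, 0, 1, 2)):
    """Exposure bracket by shutter speed: each +1 EV of compensation
    doubles the exposure time (brighter); each -1 EV halves it (darker)."""
    return [base_shutter_s * 2.0 ** step for step in ev_steps]

# A 5-shot bracket around 1/60 s: 1/240, 1/120, 1/60, 1/30, 1/15 s.
print(bracket_shutters(1 / 60))
```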

Hands on! Do this now!

Does your camera offer exposure compensation? How many stops of variance does it allow? Try shooting a bit over and under; did you lose highlight or shadow detail?
