Every Fashion Week I learn something new. This is my tenth or so New York Fashion Week by now and I’m still forgetting memory cards, wandering aimlessly trying to find the backstage entrance at Skylight Clarkson, getting kicked out of areas and wondering why Dropbox doesn’t sync faster on Starbucks WiFi.
My kit this NYFW included my Nikon D800, 16-35mm f/2.8, 50mm f/1.4, 24-70mm f/2.8, and 70-300mm f/4-5.6G Nikkor lenses, Nikon SB-910 Flash and multiple trips to CVS for AA batteries. I rented a Sony a7R II with 85mm f/1.4 from Adorama, which I loved so much I shed a little tear when I returned it today.
As a photographer, you are always learning and this fashion week was my biggest learning experience to date.
1. Having more than one camera makes a difference.
This was actually the first fashion week I used more than one camera body. I don’t like to feel overwhelmed by gear and gadgets, so I like to keep it simple. I added the extremely lightweight Sony a7R II with the 85mm f/1.4 to my kit. My 24-70mm on my Nikon is great for runway and first looks, but it doesn’t compare to the beauty that is the f/1.4. Elevating my beauty work was a personal goal of mine during NYFW, and the Sony alone did that.
2. Some things aren’t worth biting your nails over.
This was actually something another photographer said to me while I was backstage waiting for first looks at Jason Wu chewing on my hot pink nails.
Fifteen-plus photographers were crammed in a hallway at the St. Regis Hotel, waiting for models to bolt down the hallway in gowns and stilettos. I was disappointed with the lighting in the hallway and my inability to move around. My back was pushed up against the wall, leaving only 3 feet between me and the models when they lined up. My nerves kicked in and I started biting my newly manicured nails.
She [the photographer] was right. It wasn’t worth stressing over. I couldn’t change the situation. I just would have to make it work. Make a beautiful shot out of a difficult situation. But isn’t that what fashion week is all about?
3. Celebrities are people too.
I know, shocker! Prabal Gurung’s show emphasized femininity with a finale that left viewers speechless.
Bella Hadid led the pack down the runway to a cover of John Lennon’s “Imagine” in a white tee with the text “The Future Is Female.” Matching black and white tees with other sayings quickly followed. I watched Sarah Jessica Parker hug Prabal Gurung post-show. They both turned their backs to the cameras as they shed tears.
SJP posed for a couple of shots after wiping away tears. She turned to the photographers and said, “He’s all yours, gentlemen.” Then she paused, turned to me, and said, “and ladies.”
4. Welcome to “Photographer Humiliation Month.”
We often get told we have access to one thing and then it changes, or that we only get 15 minutes backstage and nothing more. We’re pushed and shoved in tight quarters all day to get THE shot.
I learned, however, that photographers have each other’s back during fashion week. We might all be kicking each other out of the way to get the photo, but when push comes to shove [literally] I can count on the backstage vets to have my back.
5. Don’t shoot just to shoot.
I used to photograph everything backstage. Just because it’s there doesn’t mean you need to shoot it.
6. Eat the catering.
I always forget this one. I probably shouldn’t admit this, but when no one is looking I sneak some of the leftover snacks and drinks into my bag on my way out the door. Essential fuel for editing. (Coach had insane chocolate chip cookies with salt on top and Thakoon had cute Rosé in a can).
7. Take on personal projects.
Just because your editor doesn’t want you to shoot runway doesn’t mean you can’t. This fashion week I made GIFs for clients. I always have fun making them because they showcase my images in a new way and break up my coverage by adding movement.
8. If you see Anna Wintour, immediately click the shutter.
I try to shoot and not think during this one, because the more I overthink it, the more I panic that she will say something to me. So I shoot, then run.
9. If you can’t find moments, make them!
Ask models to twirl, hold up the bag, and make a silly face. Talking with the models and getting to know them will help you know their personality and as a result know how to direct them in images to get the photos you want.
10. Have a little faith.
I have always been a glass-half-empty kind of girl. Maybe it’s the perfectionist in me, always striving to do better.
Three days or so into NYFW, I was over it. Ready to quit. I hated the photos I was taking. They felt repetitive and old. I was striving for something fresh. Even though I was getting lots of “likes” and “regrams,” I wasn’t happy. I was trying to stay away from the “chaos” and makeup/backstage photos I tend to lean on as a crutch.
It wasn’t until Proenza Schouler’s show that I felt like I was producing the work I wanted. Black cords lined the dingy floor of Skylight Clarkson against white brick walls. My face lit up instantly. Sometimes I need to remind myself that not every show will be amazing. I won’t love everything I shoot. But if I leave fashion week with at least five great photos I am proud of, then I’m golden.
About the author: Alyssa Greenberg is a New York and Boston-based fashion photographer. To see more of her work, head over to her website or give her a follow on Instagram. This article also appeared here.
When FujiFilm’s X-Trans III sensor was introduced in the X-Pro2, many users began noticing a strange new artifact in their backlit photographs. Upon further experimentation, it became apparent that the same artifact could also be found in images from cameras using the older X-Trans II sensor.
Many theories have been bandied about in internet photography forums, pointing the finger at specific lenses, certain body production batches, and, sadly but predictably, the users who dared to suggest there could be flaws in the output of a rather expensive camera. Very little information of a technical kind, however, has been published on the problem.
In the third episode of our X-Trans saga (the first was about color detail in the in-camera JPEGs, and the second about luminance detail in RAW processing), I will share some of what I’ve uncovered about the nature of this particular artifact on my reluctant (some might say heroic) journey to become an expert in FujiFilm’s quirky and eccentric sensor technology. Come Sancho, we must slay this wizard who enchants the people’s sensors!
Joking aside, the first thing I’ve discovered about this issue is that it is very rarely encountered in practice by those who abstain from shooting facing into the sun or similar light sources. For those who indulge in flare-filled portraits or landscapes with the sun in the frame, however, it may be a more frequent occurrence.
Let me be perfectly clear: I’m not trying to play up the severity of this problem by writing this article (it’s only happened to me a few times), only to offer some insight into how and why it happens, sharing what I have learned from many hours spent studying the issue in detail.
Bear in mind that this is a highly technical subject and this article will only scratch the surface of the issue. If you’re expecting a discussion on semiconductor fabrication techniques, electron beam coatings, etc. you’ll have to look elsewhere. The information presented here comes entirely from my own original eye-straining analysis of real-world images.
I’m not claiming it to be 100% accurate. What I call “left” could be “right” etc.—there appear to be no authoritative reference materials published on the matter by FujiFilm or anyone else. (If someone out there reads Japanese and knows where to find the patents, by all means send them my way.)
The nature of the grid
This artifact is particularly interesting because it allows the layout of the X-Trans CFA to shine through, as it were, in the demosaicked image—something which should never happen. (If you look closely you can make out the “X” of X-Trans: uninterrupted diagonal lines of green pixels which criss-cross the sensor.) Not even the camera JPEG output, generated with FujiFilm’s supposedly expert proprietary image processing, is immune to this problem. (And, yes, I’ve confirmed that Iridient isn’t immune to it either.)
Due to the complexity of X-Trans processing, the appearance of the effect will vary with the particulars of the demosaicking algorithm in use, but no algorithm will be completely immune from its effects. It may be possible to include special measures to mitigate this artifact in a new algorithm, but this would further increase the complexity and computational load, and come at the cost of resolution and the introduction of new types of artifacts.
Why is there a grid?
First and foremost, the reason this effect is apparent at all is the particular arrangement of the X-Trans CFA, with larger gaps between same-colored pixels. If a sensor utilizing a Bayer CFA were similarly affected, the presentation would probably be more like speckling than a grid, certainly wouldn’t show any X’s, and could more effectively be removed by traditional noise reduction techniques.
What causes the grid to appear?
The X-Trans II sensor found in the X-T1 (but introduced earlier in the X20, X100S, and X-E2) was the first to bring on-sensor phase-detect autofocus technology to FujiFilm’s X series of cameras. X-Trans III, found in the X-Pro2, X-T2, X-T20, and X100F, extends this concept with a larger coverage area and more phase detect pixels (PDPs).
FujiFilm’s technology adds an additional layer to the sensor, a masking layer between CFA and the photodiodes. This mask is only apparent in the central region of the sensor (the extent being greater in X-Trans III than X-Trans II).
It should be noted that there are many more masked PDPs on the sensor than there are “AF points.” 2.8% of pixels in the PDAF area of the sensor are masked. On the X-Trans III sensor, the PDAF area is 3000×3000 pixels (9MP), containing a total of 250,000 PDPs. AF points in this context are a software construct—the values of many PDPs are used to determine the focus at a single AF point.
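Those figures are easy to sanity-check with a few lines of code (the numbers are taken straight from the text above; only the rounding is mine):

```python
# Sanity-check the quoted PDP figures (all numbers from the text above).
pdaf_width = 3000
pdaf_height = 3000

pdaf_pixels = pdaf_width * pdaf_height  # 9,000,000 pixels (9MP)
pdp_count = 250_000                     # quoted total of masked PDPs

density = pdp_count / pdaf_pixels       # fraction of pixels that are PDPs
print(f"PDAF area: {pdaf_pixels / 1e6:.0f}MP, PDP density: {density:.1%}")
```

Which works out to the quoted 2.8% density (250,000 out of 9 million pixels).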
What this masking layer does is block half of the masked pixels from receiving light from the “left” side of the image, the other half from receiving light from the “right” side of the image. When the image at the AF point is in focus, the light from the two sides coincides (is in phase). Each PDP is only receiving up to half the amount of light of an unmasked pixel (1-stop less in photographic terms).
This can be compensated for by doubling its brightness in software, with the penalty of also amplifying its noise. This system is also subject to interaction between medium to high frequency detail in the image and the mask (particularly apparent in feathers and fur), but that’s another problem for another day.
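A toy numerical sketch of that compensation (my own illustration with arbitrary units, not FujiFilm’s actual pipeline) makes the trade-off obvious:

```python
# Toy model of PDP brightness compensation. A masked PDP receives at most
# half the light of a normal pixel, so software doubles its value -- which
# also doubles whatever noise the pixel recorded. All units are arbitrary.

def compensate_pdp(raw_value, raw_noise):
    """Return (signal, noise) after the 1-stop software boost."""
    return raw_value * 2, raw_noise * 2

unmasked_signal = 1000            # what a normal pixel records
pdp_signal = unmasked_signal / 2  # the mask blocks half the light
pdp_noise = 5                     # read noise in the same arbitrary units

signal, noise = compensate_pdp(pdp_signal, pdp_noise)
print(signal, noise)  # brightness matches the unmasked pixel, noise is doubled
```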
This particular implementation of on sensor phase detect is of the “horizontal” type, meaning it is only sensitive to vertical edges in the subject. Most DSLR cameras, in contrast, have AF modules which include a mix of horizontal, vertical, and, more recently, cross-type sensors. Being limited to horizontal only sensing is a limitation of all currently deployed on-sensor PDAF technologies that I’ve surveyed, and isn’t exclusive to FujiFilm.
But what does any of this have to do with the grid artifact?
There are two main factors in play, both of which seem to involve this pixel masking layer. The overall effect is a combination of these factors, the precise appearance of which depends on the particular angle/orientation of the flare and the region of the frame the flare covers.
The phase detect pixel effect
To put it simply, it is possible for extraneous light to pass through the lens and strike the sensor in such a way that most of the “left” (or “right”) masked PDPs are not illuminated (although everything else is).
Don’t believe me? Forget about optical inversion for the purposes of this thought experiment (it’s a superfluous complication): Say that a cone of hard light (flare) is shining on the sensor from the “right” direction. This illuminates all of the unmasked pixels, and all of the “right” masked pixels, but none (or few) of the “left” masked pixels. It really is that simple.
When a demosaicking algorithm, even FujiFilm’s proprietary one, attempts to construct a full color image, these shadowed pixels misguide the interpolation, spreading the error out over a wider area and allowing the pattern of the CFA to show through. Because of the alternating pattern of “left” and “right” PDPs horizontally across the image and the 12×12 repetition of the PDP mask, this effect creates an artifact with a period of 6 pixels horizontally and 12 pixels vertically across the central region of the image.
OK, but why is the flare purple?
If you’ve been paying close attention (particularly to the diagram above), you may have already figured that out: the flare isn’t purple, it’s anti-green. Purple, more specifically magenta, is the color you get in RGB additive color mixing when you subtract green from white. That is to say, a mixture of just red and blue.
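You can verify the color arithmetic directly:

```python
# White light minus the green component leaves magenta (red + blue).
white = (255, 255, 255)
green = (0, 255, 0)

anti_green = tuple(w - g for w, g in zip(white, green))
print(anti_green)  # (255, 0, 255): full red, no green, full blue
```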
The flare appears purple or magenta because, of all the thousands of masked-off pixels on the X-Trans II/III sensor, every single one of them is a pixel sensitive to green light, located in exactly the same place in the CFA pattern (the upper right-hand corner of that block of four green pixels). When a (white) veiling flare illuminates all of the pixels except for either the “left” or “right” PDPs, this leaves a deficit of green signal.
Note: in the real world, flares do tend to have a color tint of their own, but that doesn’t change the principle at work.
The masking layer thickness effect
The PDP influenced part of the effect only appears in the central region of the sensor where the masked PDPs are, but the purple flare/grid artifact affects the entire sensor. This effect seems to be caused by the added thickness of the masking layer or perhaps some other property of the sensor’s optical stack.
What appears to be happening in this effect is that light is striking the sensor from the “up” direction and casting a “shadow” from one row of pixels to the row below. This is presumably happening in the gap created by the masking layer, between the CFA layer and the photodiode layer.
Pardon the annoying animated GIF below, but this was the easiest way to visualize what’s going on. This animation comprises three frames: The first frame is the (naturally) monochrome RAW sensor data, the next frame is the raw sensor data with each pixel colorized to match the X-Trans CFA pattern, the final frame is the demosaicked image data (where the grid and purple color can be seen.)
This is from an area of the image which would have been uniformly dark (shadow) were it not for the flare.
As you watch this animation, pay particular attention to the top two pixels in each 2×2 block of green pixels. In the row below, you can see the intensity level that those pixels should have. Notice how they’re darker, and how the green pixel below a red pixel is a different shade than the green pixel below a blue pixel? Can you also see that all the blue pixels immediately below a red pixel are darker than the blue pixels below a green pixel, and vice-versa?
The green pixels don’t appear to cast any kind of “shadow” in this way, only the red and blue pixels do. Perhaps because the green filter is weaker, or because of color shifts caused by the various coatings involved, or some other effect—the physical particulars don’t really matter at this level of analysis.
This pattern affects every 3×3 group of the X-Trans pattern, and repeats on a 3 pixel period horizontally and vertically across the image, creating the bulk of the “grid.”
OK, but why is this one purple?
It should be obvious from referring to the figure illustrating the X-Trans CFA that every third row of X-Trans has an equal number of red, blue, and green pixels. That is to say, it is 33% green. The remainder of the rows are 66% green.
When a 33% row casts its “shadow” on the 66% green row below it, it is removing a significant amount of green signal from the image (the image of the flare, that is) simply because the 66% green rows have a larger contribution to the green channel. This isn’t even accounting for the fact that none of the green pixels appear to cast this “shadow.” This minus-green effect results in the flare appearing magenta overall.
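If you want to check the row statistics yourself, here is a quick sketch using one commonly published representation of the 6×6 X-Trans tile (the orientation is my assumption; FujiFilm’s own diagrams may be rotated or mirrored, but the per-row counts come out the same either way):

```python
# One commonly published 6x6 X-Trans tile. The orientation here is my
# assumption; only the row statistics matter for this argument.
XTRANS = [
    "GBGGRG",
    "RGRBGB",
    "GBGGRG",
    "GRGGBG",
    "BGBRGR",
    "GRGGBG",
]

green_fractions = [row.count("G") / len(row) for row in XTRANS]
for i, f in enumerate(green_fractions):
    print(f"row {i}: {f:.0%} green")
# Two of the six rows (i = 1 and 4) are 33% green; the rest are 67%,
# matching the "every third row" description above.
```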
All told, this “shadowing” effect is responsible for the majority of the magenta tint.
Well, since you’ve read this far, I guess I’d better show you some examples. Unfortunately, Sancho and I were unable to find any conveniently located windmills (trust me, there are at least a couple of dozen people on the planet who will find this joke mildly amusing), so this plastic flamingo lawn nativity scene will have to do.
Side Note: Below is an artist’s depiction of a person who criticizes the artistic merit of example images in articles about camera artifacts:
You may think that the example image below isn’t a good one. Please try to bear in mind that the purpose of the example is to show the purple flare/grid artifact in a real-world context, not to present a composition for artistic criticism.
You may be tempted to point out that the image is out of focus, and think that this somehow invalidates the example. It does not. Indeed, it may very well be out of focus, and if you’ve been following closely you will know why: flare (and to some extent any backlighting) causes the on-sensor phase-detect autofocus system of these cameras to go haywire. The camera doesn’t know what’s in focus. It’s hopeless. I suspect that in-focus examples of this problem are the exception rather than the rule.
The image below was shot with the FujiFilm X-T2 using the Fujinon 35mm f/2 lens at f/4.0 and ISO 200.
What can be done about it?
Unfortunately, not much. From a software perspective, you could insert some preprocessing before the demosaicking algorithm to identify the flare area, add some of the value of the red/blue pixels to the green pixels in the rows immediately below them (assuming the flare usually comes from the “up” direction), thus compensating for the masking shadow.
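Here’s a minimal sketch of that preprocessing idea. Everything about it is hypothetical: the flare mask would have to be estimated per image, and the 0.25 weight is a placeholder, not a measured value.

```python
# Hypothetical pre-demosaic compensation for the "shadow" effect described
# above: inside a detected flare region, boost each green pixel by a
# fraction of the red/blue pixel directly above it (flare assumed to come
# from the "up" direction). The 0.25 weight is an arbitrary placeholder.

def compensate_shadow(raw, cfa, flare_mask, weight=0.25):
    """raw: 2D list of sensor values; cfa: 2D list of 'R'/'G'/'B' labels;
    flare_mask: 2D list of bools marking the flare-affected region."""
    out = [row[:] for row in raw]
    for y in range(1, len(raw)):
        for x in range(len(raw[y])):
            if (flare_mask[y][x] and cfa[y][x] == "G"
                    and cfa[y - 1][x] in ("R", "B")):
                out[y][x] += weight * raw[y - 1][x]
    return out
```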
In addition, you could have a demosaicking algorithm that ignores all of the PDPs, interpolating around them. That would probably get rid of the grid for the most part, and the purple aspect, but doing so would come at a cost to resolution, in particular the green/luminance resolution, an extra quantity of which was supposed to be the saving grace of the X-Trans CFA. This would all be absurdly complicated for a demosaicking algorithm and likely to introduce some new artifacts.
A hardware solution would be to ditch the current method of on-sensor PDAF in favor of something more sophisticated like Canon’s Dual Pixel AF technology (with which such imbalances as described herein are presumably impossible because there is no masking layer and no lost light). No camera or lens yet designed can perfectly reject flare; this problem is less about the flare occurring, which is inevitable under the described conditions, and more about the way the sensor responds to the flare.
It’s worth pointing out that all of these problems could have been anticipated by FujiFilm’s engineers before ever coming close to the manufacturing stage—they just didn’t think it was a big deal. Given that they went on to release three more camera models with the same sensor design after the problem was discovered by the public, I wouldn’t hold my breath waiting for them to issue a recall over it.
It is obvious from the single-pixel extent of the artifact in the raw sensor data that this is a sensor-level effect. The grid/purple flare is not due to internal reflections between the sensor and the lens (although this kind of reflection certainly can and does happen with mirrorless cameras), but to optical or electrical effects occurring within the sensor package itself.
Any precautions to avoid or eliminate flare may reduce the symptoms, but the disease remains. The underlying problem is exacerbated by the presence of the X-Trans CFA, which imparts both the grid-like luminance effect and the majority of the magenta-colored chrominance effect.
As can be plainly seen, the overall effect isn’t particularly noticeable at typical (at the time of writing) Web display resolutions. The purple tint is present at all display sizes, whereas the grid requires magnifications higher than about 25% to become apparent. However, the grid, consisting of high frequency detail, is subject to enhancement by sharpening and other post-processing steps, which may increase its visibility at lower resolutions.
Whether or not you consider an image with this artifact to be completely ruined is entirely up to you—many people consider an image with any degree of flare to be ruined—but this is definitely a lower level of fidelity than I’m accustomed to seeing in similar situations. Furthermore, as already mentioned, due to the mechanisms involved, it is likely that the grid artifact and phase-detect AF failure are, shall we say, comorbid.
This artifact is characteristic of the FujiFilm X-Trans II/III sensor, allowing affected images to be easily identified. I can’t recall another instance of such a complex and distinctive artifact. It is, however, easily avoided by abstaining from photographing backlit subjects.
Is this mere tilting at windmills? I don’t believe so. The problem is real, if infrequently encountered, and having an understanding of its nature can help us avoid it.
About the author: Jonathan Moore Liles is a photographer, writer, musician, and software architect living in Portland, Oregon. The opinions in this article are solely those of the author. You can find more of Jonathan’s work on his website, Instagram, and Bandcamp. This post was also published here.
Many a blockbuster movie and several popular travel photo/video creators out there use something called the ‘Orange and Teal look’ when they color grade their work. Today, Parker Walbeck of Fulltime Filmmaker will explain what that look is, why it’s used, and how to apply it to your creations.
On the surface, the ‘Orange and Teal look’ is easy enough to get: you simply push Blues/Teals into the shadows and Oranges/Yellows into the highlights, creating contrast by using these complementary colors to add depth to your shot. But why Orange and Teal? Why not another set of complementary colors?
There are a few reasons.
The first has everything to do with skin tones. Parker explains that skin tones (with some obvious exceptions) typically “sit somewhere in the orange spectrum,” so pushing teals into the shadows will help skin tones stand out from the rest of the image. It’s a different way to create depth, separating your subject from the background using color instead of depth of field or light.
The second has to do with contrast. This grading technique/style is all about creating color contrast, and Teal and Orange have the highest contrast between their exposure values of any pair of complementary colors on the color wheel. Again: we’re adding depth.
The third and final reason is more speculative. Parker believes Orange and Teal are used at least in part because they replicate golden hour: warm orange light against a blue sky.
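For the curious, the basic push described above can be sketched as a toy split-tone. This is my own simplification; real grading tools work in log or other color spaces and offer far finer control:

```python
# Toy "orange and teal" split-tone: blend shadows toward teal and
# highlights toward orange based on each pixel's luminance. The blend
# strength and the two target colors are arbitrary choices of mine.

TEAL = (0, 128, 128)
ORANGE = (255, 128, 0)

def split_tone(rgb, strength=0.2):
    r, g, b = rgb
    luma = (0.299 * r + 0.587 * g + 0.114 * b) / 255  # 0 = shadow, 1 = highlight
    target = ORANGE if luma > 0.5 else TEAL
    mix = strength * abs(luma - 0.5) * 2  # strongest at the extremes
    return tuple(round(c * (1 - mix) + t * mix) for c, t in zip(rgb, target))

print(split_tone((30, 30, 30)))    # dark pixel drifts toward teal
print(split_tone((230, 230, 230))) # bright pixel drifts toward orange
```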
The theory portion of the tutorial is pretty much over by 2 minutes in, where Parker changes gears and explains how to install something called a color lookup table, or LUT, in Premiere Pro and use it to quickly and easily color grade footage in this “Blockbuster” or “Orange and Teal” style.
That last bit is more of a sales pitch for a great LUT package he found online, and more applicable to filmmakers unless you’re a big fan of LUTs in Photoshop, but if you’re interested it’s there for you and could potentially come in very handy.
So check out the full breakdown up top. And good luck not noticing this color grading look everywhere from now on…
Sony is making much of their new 100mm f/2.8 GM lens and its Smooth Trans Focus technology. But what exactly is this so-called STF, how does it work, and why does it produce smoother bokeh? This short video explains all.
The Sony video was published by The Pixel Connection on YouTube, and it explains in simple terms how something called an apodization (APD) element helps create “breathtaking bokeh” in your photographs. Essentially, the APD element acts like a circular graduated neutral density filter inside the lens, letting progressively less light in as you move from the center to the edges.
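The principle can be sketched numerically. The Gaussian falloff below is my own stand-in for the real element’s transmission profile, which Sony hasn’t published in this form:

```python
# Illustrative apodization: compare a conventional aperture (uniform
# transmission with a hard edge) to an APD element whose transmission
# falls off smoothly toward the rim. The Gaussian falloff is my own
# stand-in for the real element's profile.
import math

def hard_aperture(r, radius=1.0):
    return 1.0 if r <= radius else 0.0

def apodized_aperture(r, radius=1.0, sigma=0.5):
    if r > radius:
        return 0.0
    return math.exp(-(r / sigma) ** 2 / 2)  # smooth center-to-edge falloff

for r in (0.0, 0.5, 0.9):
    print(f"r={r}: hard={hard_aperture(r):.2f}, apd={apodized_aperture(r):.2f}")
# Out-of-focus highlights take on the aperture's transmission profile,
# so the apodized version renders discs with soft, faded edges instead
# of hard-edged circles.
```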
This graphic shows the effect this produces with the bokeh in your images:
In real-world portraits, that should mean the difference between the two portraits below, where the bokeh in the image captured with the STF lens is smoother than without:
And here’s a closeup:
This technology is not original to Sony. The same thing appears in Laowa’s 105mm f/2 STF, as you might have seen in this review we shared a couple of weeks ago. Of course, given Sony’s massive R&D budget and dedication to making their GM lenses the best of the best, one would hope the 100mm GM would outperform Laowa’s more affordable option.
If you want sharp black and white images with fine grain, then you’ve come to the right place!
I’m a bit of a freak in terms of image quality and I love very detailed photos. That’s why I’ve been searching for the combination of film and developer that would get me the best results. The technique I’m about to share is not for every situation and, ideally, you will need either a decent amount of light or a tripod.
The reason behind this is that we need to reduce the size of the grain, and the first step in this process is to use a slow film.
Usually, fine grain films range from ISO 25 to ISO 100. Finer grain automatically results in increased sharpness, as it allows finer detail to be rendered on the negative. It’s the same with digital cameras: the smaller a sensor’s pixels, the more of them there are, and the higher the definition.
For today’s article, we are going to use a roll of Fujifilm Neopan 100 Acros. I often hear good things about it and had wanted to give it a try for a long time. If you are into digital as well, you may have heard the name Acros in recent months. Fujifilm has added a new film simulation to their high-end cameras that replicates the look of this film.
Back to the film version: it’s considered a medium-speed film and can be used both outdoors and indoors. It’s also known to be very capable for long exposures thanks to its admirable reciprocity characteristics. For those of you who have never heard of reciprocity, it’s basically how a film reacts when being exposed to light. In other words: different films won’t handle exposure—especially long exposure—the same way.
In this case, the film has very good reciprocity characteristics, which makes it the ideal partner for astro or night photography. On the other hand, a film with poor reciprocity would not support long exposures very well, and would tend to develop some sort of halo effect around the highlights known as “Reciprocity Failure.” If you are interested in reading more about this topic, check the definition on Wikipedia.
The second key element for crisp images is the developer. Not all developers are equal in terms of grain quality, and in this case, Rodinal (aka R09) is known to give fine grain with slow films (it’s a different story with medium-speed films). It’s also known for being a high-acutance developer—meaning it accentuates the edges of the grain, which results in increased edge sharpness.
To make grain smoother, some developers use a silver solvent. This makes the edges between grains softer, which results in a decrease in perceived sharpness. Rodinal doesn’t contain such a solvent; that’s why it may increase the apparent grain on some films. But since we are using a fine grain film, there is no such problem.
The last element that will help us achieve fine detail is decent glass. In this case, I used a 45mm on my Hasselblad XPan, but I’m sure you can get similar quality with cheaper lenses. For this series, most of the images were shot between f/4 and f/5.6 at 1/60 of a second and exposed for the mid-tones most of the time. I’m sure I would have gotten a little more detail by stopping down to f/8, but there was not enough light that day and I was shooting handheld.
As for the development, I went with a standard development since it was my first time using Rodinal. If you want to reproduce the same steps, here are the details:
Dilution: 1+50
Temperature: 24°C (75°F)
Development time: 8 minutes
1-minute agitation at the beginning and 4 inversions each minute
You can also develop at 20°C, but need to extend the time to 13.5 minutes.
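Those two time/temperature pairs imply a rough compensation curve. Assuming the usual exponential time/temperature relationship, you can interpolate other temperatures; treat the result as a starting point, not gospel:

```python
# Rough development-time interpolation from the two data points given
# above (13.5 min at 20C, 8 min at 24C), assuming an exponential
# time/temperature relationship. Anything between those points is my
# own interpolation, not a published chart.
import math

T1, t1 = 20.0, 13.5  # degrees C, minutes
T2, t2 = 24.0, 8.0

k = math.log(t1 / t2) / (T2 - T1)  # per-degree rate constant

def dev_time(temp_c):
    return t1 * math.exp(-k * (temp_c - T1))

print(f"{dev_time(22.0):.1f} min at 22C")  # falls between the two data points
```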
Overall, I’m pretty happy with the results. It gives these images a timeless feel and a classic B&W look. I will certainly order more of this film and experiment with other developers, as well as stand development, to see how it performs.
About the author: Vincent Moschetti is an Ireland-based photographer who is in the middle of a year-long experiment where he’s shooting only film photography. You can find more of his work or follow along on this adventure by visiting his website or following him on Facebook and Instagram. This post was also published here.
Most still shooters use a ball head mount on their tripod, but photographer Hudson Henry wants you to reconsider. As he explains in the video above, using a fluid head built primarily for videographers will give you a lot more versatility.
Henry’s first experience with a fluid head came during a documentary film shoot on Denali. In order to save weight, they travelled with only a single tripod head, which happened to be a fluid head, not a ball head. When he got back home to his ball head, he found himself missing the versatility and features that are only available on a fluid head mount.
In this video, Henry explains the differences between fluid and ball heads, debunks some myths about how big or pricey or complicated fluid heads are, and shows you how versatile his own relatively affordable setup with a $135 Manfrotto MVH500AH is compared to a ball head mount.
Check out the full video up top or read about Henry’s experience on his blog. If you’ve never considered a fluid head for your tripod, get ready to have your mind changed.
Need to add some fake blood to your photo shoot? Instead of going the digital route, you can easily whip up some realistic fake blood using recipes developed for Hollywood movies. The 2-minute video above is a quick look at some of the fake blood recipes that were used over the past decades.
One of the most famous fake blood recipes was concocted by legendary Hollywood makeup artist Dick Smith, who worked on movies such as The Godfather and The Exorcist. Here are the ingredients you’ll need to recreate his faux blood:
1 quart white corn syrup
1 level teaspoon methyl paraben
2 ounces Ehlers red food coloring
5 teaspoons Ehlers yellow food coloring
2 ounces Kodak Photo-Flo
You’ll notice that the last ingredient is Kodak Photo-Flo, a darkroom product that decreases water surface tension and helps to minimize water marks and drying streaks on photographic film. Here’s the rub: Photo-Flo is poisonous when consumed…
So, you’ll probably want to avoid using Smith’s famous recipe if there’s any chance of the fake blood getting into your model’s eyes or mouth. To make the fake blood less toxic, there are safer liquids you can use (e.g. creamer) as a substitute for Photo-Flo.
If you’re new to film photography, chances are that you’ll get into shooting black and white sooner or later, inspired by the masterpieces of the old masters. But before you become the next Henri Cartier-Bresson or Sebastião Salgado, there are a few introductory things you should know.
Seeing the world in black and white is the main struggle for everyone at the beginning, but like with everything else, it can be learned and practiced with a simple understanding of how colors are translated into B&W. The human eye can distinguish approximately 500 shades of gray (well, some are limited to 50, but that’s another story). On the other hand, the scope of colors feels almost unlimited by comparison.
Why are some colors identical when turned into B&W?
Imagine a bus with only 50 seats (and no standing space) that has to carry 200 people at the same time. If they all want to get in, some people will have to share the same seat. It’s the same with colors turned into B&W: there are far too many to fit into the 500 shades of gray, so they must be compressed to all fit in the bus. To put this into an image, I’ve turned the six basic colors into gray so you can see how they translate in B&W.
We can see that some share the same seat. Look at the yellow and orange: they are nearly identical, which affects sunset pictures. Another interesting comparison is red and green: they are almost identical too, which makes pictures of a poppy field look like a muddy gray landscape… how disappointing!
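The “shared seat” idea is easy to demonstrate digitally. The snippet below uses the standard Rec. 601 luma weighting to convert RGB to gray; film has its own spectral response, so this is only an illustration of the compression, not a simulation of any particular film stock:

```python
# Rec. 601 luma: a standard weighting for turning RGB into a gray value.
# (Film responds to color differently, so this only illustrates how many
# distinct colors collapse into the same shade of gray.)
def to_gray(r, g, b):
    return round(0.299 * r + 0.587 * g + 0.114 * b)

# Two very different colors can land on exactly the same shade of gray:
print(to_gray(255, 0, 0))    # pure red  -> 76
print(to_gray(0, 100, 150))  # teal blue -> 76
```

Run it and both colors come out as the same mid-gray, which is exactly why a vividly colored scene can turn into mud in B&W.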
Does that mean that I can’t take a good B&W picture of a poppy field?
Fortunately not! There are ways to change how B&W film responds to colors. For this, you will have to rely on colored filters. Let me briefly introduce each of them:
Yellow filter: The classic among black and white photographers. Blue skies are darkened, which increases their separation from the clouds. Other colors like green, red, orange and yellow will appear brighter.
Orange filter: It comes right after yellow in terms of strength. Blues become even darker for a more dramatic effect. Most warm colors will also appear brighter than greens.
Red filter: This one is the strongest. Red will turn nearly white and foliage will appear very dark. If you want your poppy flowers to pop, this is the one, but pay attention to the background: at the horizon, the light green also turned white. It works best with darker shades of green, like in the foreground.
Green filter: The opposite of the previous one. Red will turn darker and green brighter. It’s not very popular because of its narrower range of uses, but it can give a very interesting effect when used on the right scene.
Blue filter: Another uncommon filter, but if you want to brighten blues, it’s the one! Warm colors will be darkened and reds pushed toward black, which can help to separate elements in a mixed-color scene. It also emphasizes fog and haze, which can help to build a moody landscape.
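If you shoot digital alongside film, you can approximate what these filters do with a simple channel mixer: weight the RGB channels before converting to gray. The weights below are illustrative guesses to show the principle, not measured filter transmissions:

```python
# A rough digital analogue of shooting B&W through a colored filter:
# weight the RGB channels before collapsing to gray. These weights are
# illustrative assumptions, not real filter transmission curves.
FILTERS = {
    "none":   (0.33, 0.33, 0.33),
    "yellow": (0.45, 0.45, 0.10),  # warm tones brighten, blues darken
    "red":    (0.80, 0.15, 0.05),  # reds go light, greens/blues go dark
    "green":  (0.15, 0.75, 0.10),  # foliage brightens, reds darken
}

def filtered_gray(rgb, name):
    wr, wg, wb = FILTERS[name]
    r, g, b = rgb
    return round(wr * r + wg * g + wb * b)

poppy = (200, 40, 40)  # a red flower
print(filtered_gray(poppy, "red"))    # -> 168: the poppy goes light
print(filtered_gray(poppy, "green"))  # -> 64: the same poppy goes dark
```

The same red flower renders bright through the “red filter” mix and dark through the “green” one, which is the separation trick the physical filters perform on film.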
One important thing about using filters is that they all reduce the amount of light reaching the film by one or more stops, so you must compensate for this loss when exposing. The amount varies depending on the filter, so refer to the manufacturer’s product information.
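Manufacturers usually quote this as a “filter factor,” and the compensation in stops is simply log2 of that factor. The factors in the example (yellow ≈ 2, orange ≈ 4, deep red ≈ 8) are common rules of thumb; always check your own filter’s figure:

```python
import math

# Turn a filter factor into exposure compensation in stops and a new
# shutter speed. Factors used here (yellow ~2, orange ~4, red ~8) are
# typical rules of thumb, not specs for any particular filter.
def compensate(filter_factor, shutter_seconds):
    stops = math.log2(filter_factor)
    return stops, shutter_seconds * filter_factor

stops, new_speed = compensate(8, 1 / 250)  # deep red filter, metered 1/250s
print(stops)          # -> 3.0 stops of compensation
print(1 / new_speed)  # -> 31.25, i.e. slow down to roughly 1/30s
```

So a meter reading of 1/250s with a deep red filter becomes roughly 1/30s; alternatively you could open up three stops of aperture instead.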
Considering contrast when shooting B&W
Now that we know how to manipulate each color, the other element to consider when shooting B&W film is contrast.
Depending on the style you are going for, contrast will play a major role. There are no colors to define the mood of your image, so the type of light is probably the most important element in creating the ambiance you want. Direct sunlight can be a nightmare for color photographers, but not in B&W. If you shoot street photography, for example, it’s exactly what you are looking for, as it creates contrast and harsh edges in your image. It helps to detach the subject from its environment and reinforce your composition.
If you prefer a softer ambiance, look for an atmosphere with low contrast. Cloudy or foggy days are perfect for this type of image. The light is evenly distributed, which results in a mellower mood. It’s also the ideal situation for shooting female portraits, as it makes skin look softer and more pleasing.
Another crucial element that affects contrast is the type of film you shoot with. B&W films don’t all react the same way, and it’s important to choose the right one for the look you’re after. This is really a matter of personal taste; there is no right or wrong film here, just the one you like.
If I want a contrasty image, Ilford HP5 or Kodak Tri-X are my go-to films. If I’m aiming for something softer, I prefer Fomapan 200 or 400.
“There are so many films, which one is the best?”
Choosing film can be overwhelming when you’re starting out, so if you are not sure which one to use, check out the “Film Dating” quiz I created. It helps you find the right film in just a few clicks.
The last point that will influence the result of your image is the development technique and the chemicals you use. There are many ways to go when developing, and the combination of film and developer can completely change the look of a negative.
I’ll take stand development as an example, since that’s the technique I’m most familiar with. Depending on the film and developer you use, it can completely change the contrast of your photo. I have tried this approach with Fomapan 400 (low contrast) and Kodak Tri-X (high contrast).
When stand developed in Ilfotec DD-X, Fomapan 400 turned into a super contrasty film. Conversely, Kodak Tri-X, which is known for being contrasty, produced a much flatter image with this process. These are just examples; the film/developer combinations are nearly endless. The best thing is to experiment with the chemicals and films you have at home. For developing times for each film and chemical, check out the Massive Dev Chart.
We’ve now seen that many factors can influence a B&W image, but the most important is your ability to see the world in monochrome. That’s what requires the most practice, but with experience you’ll get better; it’s just a matter of training your imagination.
If you are just starting out, forget about everything else and just concentrate on imagining a scene in B&W. Once you’ve gained more experience, it’ll be easier to apply what you’ve read above.
About the author: Vincent Moschetti is an Ireland-based photographer who is in the middle of a year-long experiment where he’s shooting only film photography. You can find more of his work or follow along on this adventure by visiting his website or following him on Facebook and Instagram. This post was also published here.