
From Print Stock to Digital Workflow - FilmLight Masterclass at Camerimage

By FilmLight

Summary

Topics Covered

  • Film's precise color calibration: A 20-year journey
  • Film labs' analog past: 24-hour turnaround for image reality
  • Bridging analog to digital: Truelight's color space mapping
  • The challenge of accurate film reproduction: Labs, projectors, and monitors
  • Beyond film: Chromogen's modern approach to color rendering

Full Transcript

Good morning, and welcome to this very special seminar. It's a great honor; every year it's a great honor. I'm always a little more nervous when I'm giving talks at Camerimage, because it's a tradition and so many people come. But today it's even more of an honor for me, because today I share the stage with two incredible people who have been long-time mentors of mine. I have a few other mentors in the room as well, like Wolfgang Lempp, who is over there, and Charles Poynton. But today I share this talk with Dr. Richard Kirk, who is a scientist at FilmLight, and with my old friend Andy Minuth. Richard taught me a lot of what I know about color science, and Andy taught me everything about Baselight. So I'm really grateful.

Today we have a talk which is a gradient from science, over engineering, to art history. I think this also reflects the DNA of FilmLight: we try to understand everything on a deep level, but then also try to make it work in practice, and then see how all of that serves, at the end of the day, the artistry. I think that's a very special configuration we have there.

Richard is responsible for the Truelight color management system, which is a standalone application for calibrating printing labs, or film labs, and over the years we have merged all of that technology into Baselight. Richard will talk about his journey of the last two decades, or even more: what he found, and I think there will be a lot of interesting anecdotes. Then I will take over and talk about how we can take that knowledge and apply it in modern workflows, and then Andy will create some nice pictures, as always. Okay, so with no further ado: Dr. Richard Kirk, it's your stage.

Okay, I'm going to be talking about Truelight. This is a product that we developed when FilmLight started, because film worked like this: you edited your images, you looked at them on a monitor, and if you could get them ready by 4:00, somebody would record them out to a piece of film, it would go off to the labs, and it would come back at 10:00 the following day. And then you would find out what your images actually looked like.

This was the year 2000, or 2002 if we start our clocks when FilmLight started. But for a long while, things had been stable. Film white had been about 50 nits since 1930, and the actual white point varied by quite a bit but was believed to be around 6,000 Kelvin. The CIE standard observer came from 1931, which is a very long time ago; they did not have any good short-wavelength primaries. Video primaries come from about 1960, and the PAL ones are pretty much unchanged. Good contrast ratios on film were about 2000:1, and for a long time we had a lot of trouble trying to match that on the monitors of the day.

But people were using computers. You could grade an image on the monitor and write an image to film, but the monitor image did not match the film in some ways. There were, however, things called telecine machines, or datacine machines. And here is an image from the very earliest days of FilmLight. If you had a telecine machine working with live video, we had the Truelight box, which sat in the middle and applied a 16×16×16 cube to the live video. So we could arrange it so that, if the RGB data coming in was assumed to be in Cineon space (a space based on the negative density), then all the controls on your Pogle or whatever workstation it was would appear to be working in Cineon. The same data could be output to a film recorder, printed to film, and put on a projector; or, on our side, it could be put out to a digital projector or to a monitor. And so we had to make a cube that incorporated all the color changes that went from there, to there, to there, to there: to the projector.

And here is another old diagram that came from the manual that went with Truelight. Now, don't be too overpowered by this. We start off, up here, with the RGB data in our image, and right at the bottom there is the display RGB data. If you record to a negative, you get to Status M density, the negative density. If you wanted to, you could do printer-light changes by putting an offset within this square. The arrows are generally transfers from one color space to another, and the rectangles are work done within a color space. Then you would print, which gets you to Status A density. Then we did a lamp calibration that says: if you've got this density, how does it appear on a screen using a real projector?

The way we did a lot of this was: when you record out to negative, we would record out a whole series of color patches, a 9×9×9 body-centered lattice. So it's about 1,400-and-something patches, and we have a few test examples here which you can come and have a look at, or we could hand round. It's a fair way in. Yeah.

Do you all want to have one in your hand?

>> Yeah. Okay. All right. Let's do that.

>> Let's do that. Okay. What this meant was: if you had too many patches, then the differences between nearby colors became hard to measure, and experimental errors meant that the transform was no longer smooth. We also had an interpolation that could take a random array of points and interpolate through the nearest points; it's based on a process called kriging, though I thought I'd invented it at the time. So if you've got a list of input and output colors, say Status M to Status A for the patches, which characterizes the print stage, you could invert it by swapping the input and the output. Truelight would then take all the transforms in the list we described, this central line down here, and make the 16×16×16 cube by passing each point of the cube through all of these transforms.
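To make the mechanics concrete, here is a minimal sketch of that cube-building step in Python. The three stage functions are toy stand-ins for the measured list transforms, not FilmLight's actual data:

```python
import numpy as np

def recorder(rgb):   # placeholder: code values -> negative density
    return 0.002 * rgb + 0.09

def print_stage(d):  # placeholder: Status M -> Status A (toy S-curve)
    return 3.0 / (1.0 + np.exp(-4.0 * (d - 1.0)))

def lamp(d):         # placeholder: density -> relative screen light
    return 10.0 ** (-d)

N = 16
grid = np.linspace(0.0, 1023.0, N)            # 10-bit code value lattice
cube = np.zeros((N, N, N, 3))
for i, r in enumerate(grid):
    for j, g in enumerate(grid):
        for k, b in enumerate(grid):
            v = np.array([r, g, b])
            for stage in (recorder, print_stage, lamp):
                v = stage(v)                  # chain the per-stage transforms
            cube[i, j, k] = v                 # one 16x16x16 lookup cube
```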

There were various other things in there: if you recorded directly to print stock, you could jump that middle stage; with a camera calibration, you could jump two steps; and in the middle we've got a visual match between display and lab, which we'll come to later. The film calibration... oh, 1421, that's the right number, I remember it now. We measured the Status M and Status A densities on an X-Rite TR310, which had a motorized deck, so you could just leave the thing going and it would measure them all.

The recorder calibration was a list of the RGB and Status M values. The print calibration was the negative Status M values and the corresponding print Status A values. And for a while, oh, almost two decades, a whole chunk of the film industry was depending on this one densitometer, though we did have a spare in case things went badly wrong.

The other thing, just to illustrate the sort of flexibility we had in the process: you could do densitometer calibrations. If I measured the film strip on one densitometer, I could then measure it on another, which gives us the Status M values on the first densitometer and on the second, and you could use that to correct the second densitometer. As it turned out, this wasn't needed. There were enough TR310s around; we had two of them, one made 30 years before the other, and they gave almost exactly the same results. So there was no point looking around for another densitometer when we had something that worked perfectly well. But exactly the same approach could be used to match scanners. We could take a Northlight scanner, scan in a test strip, and measure the RGB values (it was normally measuring in density), and we could find out the differences between it and the spectrometer. And we actually tried this.

Another part which has shaped what we do is the Z view tool. You needed a way of viewing these lists, because it was very easy to have a list where some of the points had come out badly, or there was a measurement error, or the serial line talking to the densitometer had picked up a rogue value. So this is able to display the list. Here we are looking at the recorder calibration: the colors are the RGB values of the input, and the positions are the corresponding Status M values. You could flip the thing over if you wanted to see it the other way round. So you can see that there is a fairly smooth mapping of the RGB values to density values. On the bottom we have got the transform to Status A, and this has the print stock tone curve in it, with its S-type shape, so you can see it bunches up towards the light edges and towards the dark edges. We didn't sell Z view as such, though it has been incorporated into views in Baselight, but it was very useful when you had done a set of calibrations for somebody. This used to be my first thing in the office: see if there was a new one and set it going before I had a cup of coffee. It was useful to have a quick eyeball of the values to see whether something had gone wrong.

So we can measure other film processes. We described how we would measure the Status M density, and that would get us to the Status A density. If we were doing a butterfly test, we could measure the Status M density on the LAD patch, and that would give us a gray that we could get exactly right, and hopefully everything else would come out right. But there's no specification of the film process: the print process could be bleach bypass, or cross-processing, or whatever. Most films are done where the grading happens on the recorded-out negative, and then you get an answer print out of that. But the actual film release will probably mean you make a copy of the negative, so it's a second-generation negative you're showing. That doesn't matter: you can make a second-generation negative, calibrate to that, and then see what the print from that comes out like, if you want to.

And then it was easy to modify Truelight. Somebody wanted to record directly to, I think it was Ektachrome or something like that, a reversal stock, so the thing came out directly on the print-side stock, and that was easy to do. We tried calibrating cameras, too. We had an entire method for doing that which never was widely used, because it's usually so hard to find out what's going on inside a camera; it's easier to find out what's going on inside a piece of film, mad as that may appear. Anyway, having got these calibrations, you can say: given the RGB values, we can get from there to an exposure on the negative. But we could also say, well, we know what the tone curve is on something like an old black-and-white stock. So you can say: this is what we would have got as an exposure, and then we can work out processes like dye transfer for Technicolor, or even the hand tinting of black-and-white films, or autochromes, or things like that.

Another thing we had was printer lights. People like being able to alter printer lights, because that's what you could do. For those not familiar with printer lights: if you have a film lab, there's a whole bunch of parameters which vary the amount of processing the film gets. If the film chemicals are getting old, or the bath isn't up to temperature, or you're not doing enough stirring, or the film is not in the bath long enough, then the print stock will come out lighter; you won't get up to the density. So there are a whole load of corrections: you can vary the bath temperature, the bath speed, and the film transport rate, and the printer lights are the amount of light you're putting through the negative when you're doing the print. Those keep the whole process roughly on track. And if there was something else (and there were a whole number of other ways in which the chemistry could break down), then there was no correction for that. We just had to flush the lot away and have another go.

The Truelight test patch: we generated these, and it says Kodak LAD default gray, value 445 445 445, should print as 1.09 1.06 1.03. That's in the Cineon standard. We would actually measure this patch if we wanted to do a butterfly test, where half the screen would be showing a film projection and half the screen would be showing our simulation on a digital projector, so we could get the colors to match exactly in real time. What would happen is your print would come back from the labs, and often it would be within 0.02 or something like that, if you were lucky. As it turns out, we also found that some labs (they'll remain nameless) printed everything at density 1.1 1.1 1.1. And when we asked why it was this far out: "Well, we always do that, people always do that, it's just a nominal print density." No, it isn't: this is what we asked for, this is what we should get. In the end it turned out to be the equivalent of Van Halen insisting that the dressing room should have a bowl of M&M's with the brown ones taken out. The reason for doing that was not because they hated brown M&M's, but because they were going to be performing on stage underneath 20 tons of light held up by wires, and they wanted to make sure the people doing the rigging could obey simple instructions. And the labs got to learn that one of these at the head and the tail of the film would mean that somebody would come back if they weren't printed to standard.
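The check itself is trivial, which was the point. A sketch of the LAD-style verification, using the aim densities and the roughly 0.02 tolerance quoted above:

```python
# The grey patch at Cineon code 445,445,445 should print to roughly
# Status A 1.09/1.06/1.03; a good lab would come back within about 0.02.

AIM = (1.09, 1.06, 1.03)   # Status A aim densities for the grey patch
TOL = 0.02                 # typical acceptable deviation quoted in the talk

def lad_ok(measured, aim=AIM, tol=TOL):
    return all(abs(m - a) <= tol for m, a in zip(measured, aim))

print(lad_ok((1.10, 1.05, 1.04)))   # True: within tolerance
print(lad_ok((1.10, 1.10, 1.10)))   # False: the "everything at 1.1" lab
```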

There are lab process variations as well. One of the things we did discover is that the print was significantly darker, and slightly bluer, at the edges than in the middle. If you can remember how film used to be: it could actually be half as bright in the corners, just because of projector variations. This was considered acceptable, whereas these days it would look pretty mad. But at the edges of the film, the perforations stir the bath, so you get a bit more chemistry happening, and therefore we could calibrate for the middle and the edges separately. We even tried this at one point, but it wasn't very exciting.

Another thing we found was about the first turn on the spool in the film recorder. If you had an ARRILASER that had been sitting overnight in Britain: film processing in California works with an average humidity level of about 20%, instead of 90% in London, or 110% as it felt like. So the film would pick up more moisture, and more moisture made the film slightly faster. So what you would want to do is record off the first turn or so of the film, so that you got past this extra-moist stage, and then everything became fairly standard.

It's of course always useful to stick another series of test patches at the end, just to check that the brightness hadn't changed or something like that. But in the end, you could get this patch right and other things could still change. There was a terrible story of trying to get one of the Harry Potter films out: at one of the labs, somebody had done a weld on the corner of the tank because it was leaking. There was some sort of ionic contamination, nobody could ever find out what it was, but there was a weird distortion that kept changing in the color spaces. And I think they eventually had to drain the tank and throw the chemistry away.

Another stage we had to do was the projector calibration. What we can't easily do is take a big long strip of film and say: for each of our color patches, we will project it and measure the brightness on the screen. So what we did was measure the open-gate white coming out of the projector, and then measure our film on a light box with a broadband source, so you got the transmission spectrum. Then we were able to work out all the conversions from Status A to Lab, relative to the white for a particular projector. And this slide is showing the difference between two projectors. One is the original projector, I think it's this point, and the other one was an ARRI LocPro. The LocPro was a desktop projector that had been designed for X-ray angiography, motion X-rays. It was used in hospitals, but it was also the only thing that could project 35mm film in a small grading studio, so they were widely used. But they had a funny lamp which had a sort of greenish appearance. You could adjust it so that it was slap on D65, and yet you would find that flesh tones around here would look grayer, in fact even slightly green. I don't know why they picked that particular lamp, because we found other small ones that worked. Until we did this, we just had this nagging feeling that something was wrong. That's a whole lot of what Truelight is: digging up silly things like that.

One thing we did try: measuring the transmission spectrum of each patch in the projector. But a spectrometer typically takes something like 20 seconds to make a measurement, so that would be hundreds of frames for each patch; we're talking about 100,000 frames to run a test. You can't do that. And anyway, the thing will be flickering, which some spectrometers can't handle. The other thing we found was that arc lamps are not stable. In fact, they're not stable in a very cunning way.

We discovered that an arc lamp can have two stable positions for the arc. So you start doing a series of measurements: you measure the white at the beginning, you do a series of measurements, and all of a sudden the brightness jumps by 5%. It will stay up 5% with the arc in the other position, and then it'll jump back again, and then you'll measure the white afterwards and it agrees to 0.1 of a percent. But not for the ones in the middle. So in the end we did all our measurements with a tungsten source, because those are beautifully stable, and measured each patch of a transmission strip using a spectrometer. This took a long time, and I think other people have done this too. I'm told that Company 3 in America did a calibration of a film strip that took them 12 hours, and the results are pretty much the same.

But there is a thing called the Callier effect. Now, you hardly see this with color film, but film can scatter light as well as absorbing it. The Callier coefficient is a rough correction for this, scaling the film density, because each bit of grain or dye will be scattering as well as absorbing. The scaling depends on the projection optics: if you've got a different f-number... well, most projectors are about f/4, so it doesn't matter too much. But if you're trying to reconstruct something like the Friese-Greene process, it becomes important, because they'll be using a different lamp and a different f-number, because there's not enough light. I think we actually did try measuring the Callier effect on color film, and it's pretty tiny. The scaling is 1.015 or something, 1.04 in some cases; it's hard even to measure, and it's the sort of difference that our eyes usually take into account. We've had a look at it, and you can see it if you know where to look, but it's rather tricky to do.
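A sketch of how such a correction is applied. The multiplier is illustrative, in the 1.015 to 1.04 range mentioned above:

```python
# Callier correction sketch: scale the measured diffuse density by a small
# constant before converting to transmission. The constant depends on the
# projection optics; the value here is the order of magnitude quoted above.

def projected_transmission(diffuse_density, callier_q=1.015):
    effective = callier_q * diffuse_density  # scattering makes film act denser
    return 10.0 ** (-effective)

print(projected_transmission(1.0))   # vs 10**-1.0 without the correction
```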

There are other effects that are much more significant.

Part of the story is what a monitor used to look like: a great big thing like a fish tank, and it weighed about as much. The Sony BVM monitors. Well, you used not to have two of those in a room. Somebody wisely said that a grading studio with two monitors is not a grading studio, because people are always looking at the wrong one. You couldn't look at the wrong one here, because they cost too much, so you wouldn't have two. And there was a wide 16×9 version, which I know one place in London had to have the floor strengthened for before they had it put in. So you don't have two of those unless you want to end up in the basement. The rest of the world was different; there was a Sony, I can't remember the number, a big square thing, but a normal monitor.

Oh, how could I forget that number? I spent hours in front of these things. You would get the thing looking just right, and then you would come in the following morning... no, no, you never turned it off, that's the thing: they were kept permanently on so they didn't drift. And you came in in the morning and all of a sudden the blacks were gray. Somebody had unplugged it to plug in a Hoover or something; we never found out, nobody ever admitted it. They would be right for two weeks and then hopelessly out the following day. So you had to keep calibrating.

Another thing we had was a test strip you could put up. It had black and white stripes, surrounded with gray, and the black and white stripes should have the same average brightness as the surrounding gray. If they didn't, you knew you had to go and recalibrate the thing, so you didn't spend an entire morning working on a badly set-up monitor and then discover it afterwards.
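A sketch of the idea behind that test strip, assuming a pure power-law display: the grey that visually matches a 50/50 mix of black and white lines can be computed directly.

```python
import numpy as np

def matching_grey(gamma=2.4, black=0.0, white=1.0):
    # for alternating 1-pixel lines the eye averages luminance, so the
    # matching flat grey code value is the mean luminance pulled back
    # through the display power law
    mean_luminance = 0.5 * (black ** gamma + white ** gamma)
    return mean_luminance ** (1.0 / gamma)

# build the chart: grey field with a striped square in the middle
g = matching_grey()
chart = np.full((256, 256), g)
stripes = np.tile([0.0, 1.0], 64)             # 1-pixel black/white lines
chart[64:192, 64:192] = stripes[None, :128]   # broadcast rows of stripes
# if the display really has gamma 2.4, the square melts into the field
```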

We also, in the early days, had this gadget: you opened the lid, stuck it on the monitor, and it ran a calibration program that put the thing through all the various colors. The monitor probe had four sensors, which allowed us to get CIE XYZ including the two peaks in the red. We had custom-built dichroics put over a 1 cm square detector, so it was nice and sensitive; it could go down to a thousandth of a nit.

The other thing we had to do was the video tone curve. We had Rec 709, which had a perfectly good tone curve standard, but it's a tone curve standard for cameras. There was no Rec 1886 then. There are people in the audience who knew that.
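For reference, the CRT-style EOTF that Rec. 1886 eventually standardized, in 2011, well after the period described here:

```python
# BT.1886 EOTF: Lw and Lb are the display's white and black luminances.

def bt1886(V, Lw=100.0, Lb=0.01):
    g = 2.4
    a = (Lw ** (1 / g) - Lb ** (1 / g)) ** g
    b = Lb ** (1 / g) / (Lw ** (1 / g) - Lb ** (1 / g))
    return a * max(V + b, 0.0) ** g   # cd/m^2 for a signal V in [0, 1]

print(bt1886(0.5))
```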

So we went and measured a number of Sony monitors and found they were completely different, which was an understood thing. The Rec 709 signal was made so that it would look nice on a monitor in a semi-darkened room: not a light surround, but not a dark surround either. So what we ended up doing was making a Sony HD calibration which fitted all the good Sony BVM monitors in Soho; there were on average about five of them, and they weren't all like each other. They would be set up properly using a big thing called the Philips probe, which had a big suction cup like a sink unblocker, and the dark end would be set up using a PLUGE test, something the BBC produced, based on the visibility of stripes. In the end, this monitor calibration worked well when all the displays had video-like primaries, and for a while they did.

You've got a plasma display that used exactly the same phosphors. But everything went badly wrong when Barco brought out the DP90P, which had a wonderfully deep red. So when you were sitting in a dark studio doing a calibration, you would calibrate the Sony BVM monitor, going through the red, green, and blue calibration, and then you would turn on the DP90P, which would put up a screen of red, and the Sony monitor used to look mid-orange compared to it. The problem is that your visual red response is a broad peak, and the various lines in the red phosphor more or less fit on that, but the DP90P red was off to one side. So it only required a little bit of slope in the sensor response to mean that the deep-red measurement was really quite badly out. So we had to build the projector probe. This had a mirror at the back, and it had a spectrometer plus the four detectors for measuring colors. It would calibrate itself for the particular display: it could get the color measurements accurate using the spectrometer, and from those get the sensitivity of the colorimeters.

But this isn't just a matter of making a piece of kit that works. Our eyes are like colorimeters. If there's a problem with a narrow-band projector with a very deep red and a very deep blue, and the projector calibrator sees the colors differently, our eyes will see them differently too. And we're beginning to find out quite how many people have worked in the industry for 20 years with anomalous vision and have never known it. Perhaps 5 or 10% of the audience may have anomalous vision. So we can't just say, "Oh, well, you've got funny eyes. That's tough. Go and see a doctor."

Viewing conditions are another huge bit. We touched on this when talking about the video standard: film was viewed in a dark room; video was viewed against a backlit gray wall in the standard grading studios of the time. And this slide illustrates what happens: these patches are all exactly the same color. There's a similar presentation where the patches have a graded shading, which is even more dramatic. But I'm doing no funny business. That gray and that gray are exactly the same. This is what happens when you've got a reference gray. In general, I would advise having a black surround, because black is cheap: there's lots of black, and everybody agrees what black is. It's no light. Whereas trying to get one nit's worth of D65 gray is really rather tricky.

So we had to compensate for all of these differences. There were known vision models, so we looked at various ones. What we picked was one that Kodak used, done by Bartleson and Breneman in the 1960s. That was for viewing black-and-white material, and it seemed to do what we needed. When I plotted it out, I found that the contrast went down as things got brighter and brighter and then started coming back, because they had plotted it on log-log paper. So I fitted a different function to the actual measured points they had and extrapolated in a slightly better way. It was a very simple thing, but it more or less does the right sort of contrast stretching. I then tried things like the CIECAM02 model, which is still built into Truelight, and the later CIECAM, was it 16? We tried that, and the results were pretty much the same. But we use CIELAB for matching color with respect to white, and that seems to work. The reason for not using CIECAM is that it has lots and lots of parameters, and many of the parameters aren't things you can objectively twiddle. We stuck to a nice simple model for doing contrast matching: you've got a surround parameter that describes whether the surround is light, medium, or dark, and that's it. It's simple enough that it can be wrong, rather than having hundreds of parameters.
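A minimal sketch of such a one-parameter surround model. The exponents are illustrative ballpark figures of the kind found in the appearance literature, not Truelight's calibrated values:

```python
import numpy as np

SURROUND_GAMMA = {"dark": 1.5, "dim": 1.25, "average": 1.0}

def surround_compensate(Y_rel, surround="dark"):
    # Y_rel: luminance relative to reference white, in [0, 1];
    # a darker surround calls for more contrast stretching
    return np.power(Y_rel, SURROUND_GAMMA[surround])

print(surround_compensate(0.18, "dim"))  # mid grey viewed in a dim surround
```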

Something to think about: normally we look at films and the surround is dark. What should we do for IMAX? The surround is more film; that's the whole idea of it. So do we have a different tone curve for IMAX? Nobody has done it.

Anyway, there's one parameter we've also got within Truelight called the contrast control, and this is not usually included in color appearance models. A dark film (this is my impression of what dark film looks like) projects as a moving texture. You can see that things are happening; it's busy; something is going on. Therefore it is not black. You can stick your hand up in front of it and it will cast a shadow, so you can see that there is light. Whereas a CRT would have a big sheet of glass; on the big ones the glass could be 15 mm thick. As you moved your head you got reflections from the front, but those were obviously not part of the image. If the image had a 0.5% glow on it, you didn't see it. You discarded it. You saw it as being black. So we had to put in a contrast control to say: if we know that the film black is not black, we've got to add a bit of brightness to the simulation to give it the same sense of not being black. And it does, and it's needed for things like butterfly tests. It's usually confined to the really dark end of the film, and it's not terrifically helpful. But if you're wanting to say "I can match film", you've got to show people that you can match film.
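The operation itself is tiny. A sketch, with the 0.5% glow figure borrowed from the CRT example above; the real control was set against measured film blacks:

```python
import numpy as np

def add_glow(img, glow=0.005):
    # lift the simulated image by a small veiling glow so its blacks have
    # the same sense of "not quite black" as projected film
    return (1.0 - glow) * np.asarray(img) + glow

print(add_glow(np.array([0.0, 0.18, 1.0])))
```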

Now, there is another thing that we did. We rarely know the spectra in the scene. If you're looking at this, the Marcie image, we vaguely have an idea: it was probably shot with roughly a D65 balance. We don't know the spectra coming off these leaves or things like that. But what we can do is, for every color, if you've got say a Lab or XYZ value, calculate the smoothest reflectance spectrum that would fit the XYZ values, and we've got a whole set of those. We can then say, for each point in here: if that was daylight, suppose we replaced the daylight with some other light. Here we've got a low-pressure sodium light simulation, so we can simulate what a change in light source might look like. And this in some ways is better than shooting a scene under sodium light with film that had been developed for ordinary light. If you replace the spectrum with something that the film is not used to seeing, you can get unexpected effects. The most dramatic was color Polaroid: if you took a color Polaroid of sodium light, it came out green. There are just fewer surprises this way.
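A sketch of the smoothest-spectrum recovery posed as a constrained least-squares problem. The Gaussian observer curves are toy stand-ins; real CIE observer and illuminant data would be loaded instead:

```python
import numpy as np

wl = np.linspace(400, 700, 31)
def gauss(mu, sd): return np.exp(-0.5 * ((wl - mu) / sd) ** 2)
A = np.stack([gauss(600, 40), gauss(550, 40), gauss(450, 30)])  # fake CMFs

target = A @ np.full(31, 0.4)        # "XYZ" of a flat 40% grey, as a demo

n = len(wl)
D = np.diff(np.eye(n), 2, axis=0)    # second-difference (curvature) operator
# KKT system: minimise ||D r||^2 subject to A r = target
K = np.block([[2 * D.T @ D, A.T],
              [A, np.zeros((3, 3))]])
rhs = np.concatenate([np.zeros(n), target])
r = np.linalg.solve(K, rhs)[:n]      # the smoothest matching reflectance
# r can now be re-lit: new_XYZ = (A * other_light_ratio) @ r, per colour
```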

And this led onto a thing called the triangle transform. You've probably heard about cubes; everybody uses 3D lookup tables. Here's a 2D lookup table. What you do is take the color and scale it so that R + G + B = 1. You then interpolate through a triangle of points, and then add back the scaling. This is faster, and it uses fewer data points than the cube transform, so you can stick in tighter data points for the same coverage. Here is the triangle transform for doing the illuminant-change conversion we showed in the last slide, and you can see it does have oranges and it does have greens.
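A sketch of a triangle lookup, with an identity lattice standing in for a real relighting table:

```python
import numpy as np

N = 8                                   # lattice resolution
table = np.zeros((N + 1, N + 1, 3))
for i in range(N + 1):                  # fill with identity for the demo
    for j in range(N + 1 - i):
        table[i, j] = (i / N, j / N, 1 - i / N - j / N)

def triangle_lookup(rgb):
    s = sum(rgb)
    r, g = rgb[0] / s, rgb[1] / s       # project so that r + g + b = 1
    x, y = r * N, g * N
    i, j = int(min(x, N - 1)), int(min(y, N - 1))
    fx, fy = x - i, y - j
    if fx + fy <= 1.0:                  # lower-left triangle of the cell
        out = (table[i, j] * (1 - fx - fy)
               + table[i + 1, j] * fx + table[i, j + 1] * fy)
    else:                               # upper-right triangle
        out = (table[i + 1, j + 1] * (fx + fy - 1)
               + table[i, j + 1] * (1 - fx) + table[i + 1, j] * (1 - fy))
    return s * out                      # add the scaling back

print(triangle_lookup(np.array([0.3, 0.5, 0.2])))
```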

Another sort of triangle could correct set lighting. If you've been using cheaper LED lighting and you wanted to make it look like daylight, because the spectrum was slightly different, you could correct it using this sort of thing.

Now I'd like to talk about some extreme colors. The saturated colors we've got around the outside lie beyond the reflection colors, and in the middle I've drawn this square, because what I'm looking at is a slice through the RGB cube. So we've got the really saturated reds, which are not showing up as very different on this display. And that's okay, because by and large, with most color spaces, the colors around the edge are less visibly different than the volume of the color space would have you believe. The inside, like the TARDIS, is bigger than the outside.

So in quite a lot of senses, the video gamut has most of the reflection gamut: the colors you can get as reflection colors. There aren't paints that reflect just 500-nanometer light; if there were, they would be very dark ones, because everything else is at some other wavelength. So there aren't many saturated colors that we can see. The ones we can see are things like car headlights or colored LEDs, and we don't see them as absolute colors to the same extent. If you go to the cyan corner, a typical sRGB display will only go out to about 50% of the possible saturation for 500-nanometer azure, but that doesn't matter; we don't seem to miss it. So the saturated colors have only got limited interest, even though it would be kind of nice to have a color space that can do them.

The deep shadows: you can stick stuff in the deep shadows, but unless the whole image goes dark, you won't see it. And if the whole image goes dark, it will take about 30 seconds to adapt, and then the adaptation will just continue on for the next 5 minutes. So you can't tell a story.

Well, I suppose you could. You could have a prison escape scene where you're in the dark and then the searchlight comes and blinds you. But that would be a rather limited application; there's not much storytelling you can do. I would say there's only one interesting corner: the brilliant highlights. If the video corner here is the sort of white that you would have on snow or white paper, and that was about 300 nits, and supposing that corner there is 1,000 nits: that gives you little one-pixel-wide sparkly highlights. It will make water look wet. It will make crystal glass look sparkly. It does. It's one of the few good tricks that is still out there to be had. But if you have a big highlight like the setting sun, that will give you terrible after-images, and the after-images depend on where each person in the audience was looking at the time it appeared. So everybody will see something different. So these highlight coordinates are only interesting provided you keep the highlights small.

Part of the mission for Truelight was that we had to educate people into seeing how it works. There was a manual that came with the Truelight library that described each control, and the controls included a whole lot that we didn't provide knobs for in the user interface. There were about 20 or 25 different parameters, but most of them you just left at the default settings. The way things worked in Truelight, the total space that you've got for parameters might have had 30 or 40 dimensions. If you started saying, "Oh, I can tweak this to make the grays look better or the flesh tones look warmer," then you're wandering off in a 40-dimensional space, and your chances of finding your way back to where you ought to be are nil.

So from the very early days we said we ought to write a book where we explain what Status M space is, what Status A space is, what contrast matching is, what the models are. This was planned from the very early days, but in fact it was the very last bit to be finished. You can download a free PDF version of the whole thing from there, if you want to see it.

As for errors: there's a Peter Doyle story I can tell. I remember being hauled out to try and fix a very dark grade of a Harry Potter film, and everything was subtly wrong. So I opened up the Truelight settings, and somebody had poked all sorts of weird things in. So I thought: I'm going to clean all of this out, I'm going to start from scratch; I'll keep a copy, we can come back. "We haven't got time for this." We haven't got time not to do this; I don't know what any of that is. We went and measured the LAD patch, put everything back to where it should be, and everything pulled straight. It all worked.

And this is what we found many times with Truelight: keep your head and stick with what you can measure. The only bit we didn't measure was the contrast control, which made the blacks in film look lighter; everything else had to be done on measurements.

There is a thing called TL Utils. It's an application that is, I think, still distributed with Baselight. Here's the big long list of things it does. It was developed so that every time we added something to Truelight, every new function, we put the routine it used into this and ran it with test data, and it's got regression data so that you could exercise every part of the Truelight library. It was also useful for making list transforms, editing list transforms, and viewing them, and it has been going on for 20-odd years, growing and growing. Some of the things in there: "units" is a very old one, which converts from foot-lamberts to candelas per square meter. Hardly anybody uses foot-lamberts now; I think the last foot-lambert died in the Bronx Zoo back in the... But I still use it: if somebody wants a new transform, I will make up the transforms by hand using these sorts of tools, and other people can too.
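For the record, the conversion that little tool performs:

```python
# One foot-lambert is 1/pi candela per square foot, about 3.4263 cd/m^2.

FL_TO_NITS = 3.4262591

def footlamberts_to_nits(fl):
    return fl * FL_TO_NITS

print(footlamberts_to_nits(16.0))   # SMPTE's 16 fL screen white, ~55 nits
```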

Then there's the by-eye calibration. This is relatively new; you can download it from the App Store for free. The idea here is that an iPhone has a very consistent build. I'm not doing adverts for Apple, but I tried this: the original version was developed for an Android phone, and I found there were so many Android screen sizes and different cameras (the developer database was full of friendly people, but goodness, it was a lot of work) that in the end I had to go the Apple way. What it does is let you adjust this patch: you click on red, green, or blue to make it redder, greener, or bluer, or you can go backwards. Red here is a sort of pinky color, the opposite of green, and blue and yellow are opposites. You can make it lighter or darker, or you click the background, which says the nearest match is the color we've already got. And you can do this repeatedly until the color is the color you see. So if you're at home, what you can do is stick your phone next to your monitor and capture what the white looks like. You can also capture it using the camera, but this by-eye stage is the last stage, where you fine-tune it to say: this is what my monitor white looks like. And then you go into work, and you can take out your phone and say: this is my home monitor white; what's this monitor? Are they different in the way I thought they were, or is it just that you've got better curtains at work? Anyway, there it is. It's got a vision test and a whole lot of other things. It's free, and it doesn't do any storing of your data.

It was an experiment. Now, this is almost the end of the talk proper, but there are a whole load of side tracks I could go down, some of which I've actually touched on already. The original Truelight box happened before digital video was ready, so it took in analog values, converted them to RGB, and then put out analog video. We only made one of those, but we could do it; fortunately, digital video came in. We also then had the ability to sample values from analog or digital video. You could say: I will put the cursor on this point, and it will say what is coming in. That was useful, because you often found that people did legalization twice, or not at all. In one case, I remember somebody had this big grading workstation, and I said, "Look, it looks like the thing is trying to display legal values on a full-range monitor." "Well, no, we've corrected for that." No, I don't think you have; look, I can measure them all, and here they are coming out. And then he went to the table, opened up a little door, pulled out a box, pressed a button, and put it back, and everything was right. I never knew that that thing existed in there until he pulled it out and pressed the button. So it was useful just for doing some of those sanity checks.
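The scaling at the heart of that anecdote, in 8-bit video terms. Apply it twice and the greys wash out, which is exactly what sampling the signal catches:

```python
# Video-legal range puts black at code 16 and white at 235 (8-bit).

def full_to_legal(x):                # x in [0, 1]
    return (16.0 + x * (235.0 - 16.0)) / 255.0

def legal_to_full(x):
    return (x * 255.0 - 16.0) / (235.0 - 16.0)

v = 0.5
print(full_to_legal(v))                     # scaled once: correct
print(full_to_legal(full_to_legal(v)))      # scaled twice: lifted, washed out
```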

Another thing we had was a gamut alarm. It was a cube, but everything that was in gamut we rendered as black and white, and we left in color all the colors that we couldn't match. So if you were looking at the Marcie image, you would find the red and yellow of the Kodak film packets, and the patches down the side, used to come up colored. It was a message that the film can do colors you can't get on this monitor. There's nothing you can do about it; we didn't try to squash them into gamut.
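A sketch of the alarm logic, with a toy display transform standing in for the real cube:

```python
import numpy as np

def to_display(img):
    return img * 1.2 - 0.05              # toy transform that can overshoot

def gamut_alarm(img):
    out = to_display(img)
    in_gamut = np.all((out >= 0.0) & (out <= 1.0), axis=-1, keepdims=True)
    luma = img.mean(axis=-1, keepdims=True)   # crude monochrome stand-in
    return np.where(in_gamut, luma, img)      # colour = display can't match it

img = np.random.rand(4, 4, 3)
print(gamut_alarm(img).shape)
```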

It just told you that something was going wrong. There was a call from somebody; they were doing some of the Attenborough Life on Earth calibration, and they were saying, "The gamut alarm isn't working. I put it on, and it's just the same." It was a picture of a shoal of herring in the North Sea, a sort of cyan color, underneath the sea. Every pixel was out of gamut. So it was doing exactly the right thing; it just didn't look like it. That mystified me for about 20 minutes, and then all of a sudden I thought, "Oh, I think I know what we've got here."

Tone curve test images: you will have seen we've got a tone curve test in the phone application. This is just to show that the tone curve on the display is what you think it is, and if it isn't, then there's something you have to turn off. It's the gray-surround test again: black and white stripes against gray.

D65 light boxes were mentioned earlier; here's another thing. People used to look at film on light boxes: when you got dailies in editorial, they would have a light box. Light boxes were always specified as D65, because D65 was used in the print industry and there was a standard for looking at stuff, but nobody made D65 light boxes. So I phoned up VeriVide and said: do you do a D65 light box? No. But you could make one; you could just change the tubes. "Well, no, we don't do that. But you could." Long pause and a think. "I'll get an engineer." I spoke to an engineer, and in half an hour they had made D65 light boxes. That's one of the easier pieces of R&D and product development I've ever had. They still make them. I don't know if they've managed to sell more than 10, but it was so easy to do, why not?

Here's another thing we tried. We wanted a color version of the gray-level test chart that shows where the gray level is the same, but we couldn't find a color way of doing it. The easiest way would have been to buy a load of Bluetooth RGB light bulbs and say: I've measured one of these, and if all the batches are similar, we can just set this light bulb to this RGB and say, "This ought to be the same as your monitor red, and if it isn't, then we need to correct something."

something." Uh the other things I think I'm almost wound up here. I'll end up we had uh

uh this was a simulation of an early F Fuji film stock where I I replaced it with the Fuji uh intermediate because the intermediate

film do not have the crossoupling uh bits and I was thinking this will be what the early Fuji film must have looked like and it it almost got it spot

on first go. So So that that's a good sign. There was hand tinting that we did

sign. There was hand tinting that we did for Greg Gatsby and autochrome for Hugo.

So, winding up: a lot of it may seem obvious, but a lot of it was very painstaking testing of all the obvious bits and assumptions, making small improvements wherever we could. It ended up taking 20 years, where we had to make our own probes, our own test strips, motorized test charts. We had to check the display hadn't drifted; the monitor calibration; we had to check people's eyes. None of this existed as standard. And what's happened since is that hardly anybody calibrates monitors, because they're lovely and stable. Little of this work survives; these are revels that, ended, leave not a rack behind. What was it all for, you might ask? The answer is: well, we did it once, and we understood it, and if we have to do it again, we can. We can simulate film processes; we can simulate new displays; we can ask what you could do in a film that we can't see these days. Can we get bright highlights? We understood it all. And there's a little bit I'd love to see us finish: I'd like us to switch over from 1931 XYZ to the 2006 LMS observer, and a few other things. But it's almost done. If we could just have one last push. Okay, I think I'll end it there.

>> Thank you, Richard.

>> Spot on.

So, thank you very much. I always enjoy it when you bring those anecdotes; it's always fun to listen to. I need to get to the beginning. Sorry, now I'm spoiling everything because I'm not at the beginning. Stupid me. Okay.

So we heard that there's a big body of knowledge in Truelight, and we also have modern workflows, and it would be a shame if we could not translate some of that into the modern world. This is what I and others have been trying to do: how can we bring all of that knowledge into modern scene-referred pipelines? That is what I'm going to talk about a little bit.

First, a recap on a tool we developed a few years ago. It's called Chromogen, and maybe some of you have heard the presentations about it, or even created some looks with it. We had a promise that said: okay, we have the film look, it produces pleasing images, and we can see by the sheer number of print film emulations and plugins coming out on a daily basis that the aesthetics of film are still a big thing.

But we kind of said, a few years ago: well, we have this Chromogen thing, and with it we can escape the local maximum of film. Or in other words (this is also a slide from a few years ago), we said: with the Chromogen framework, we can actually match a lot of film stocks visually, but then extrapolate out of that. This is what we set out to do a few years ago. But where's the proof? People said: okay, but show us how to do that. And this is where I consulted Richard and the Truelight library, because we have a lot of information and measurement data covering the whole path from Cineon to the visual appearance in a cinema, in all sorts of flexible ways.

So we have different profiles that go, as Richard said, from Cineon to Status M and on to Status A densities. What I did is I looked at some of the profiles that are shipped with Truelight. Everything is based on the ARRILASER recorder, and I picked four intermediate film stocks where we had calibrations. "F" stands for a Japanese, or Asian-based, film manufacturer, and "K" stands for another one. So we had four different intermediate stocks, and we had a few print stocks; "A" is a Belgian-German, older film manufacturer. So we had all of these different Truelight calibrations for print stocks, and we also had transmission spectra and projector calibrations, which, as Richard mentioned, also slightly change the appearance.

What I did, basically (oh, here's again the slide Richard showed; I can maybe skip over that) is run all of the allowed combinations of going through a camera negative, onto a print, and then through a lamp simulation. This gave me 72 different combinations of film stocks. When I made this slide, I was expecting to find out how different they would all look, and then I looked at it and thought: oh, they all look very similar. So I was a little bit disappointed that from the outside they all look the same. And this is also what you find if you really look at the measurement data: they all look different, but to a smaller extent than what LUT packages nowadays suggest, where you apply one and then another and it looks completely different. That's actually not how actual film worked. So there are subtle differences if we compare an "F" film to a "K" film print profile.

Yeah, sorry, I'm legally not allowed to say the full word; it's just a placeholder. So there are differences here, and you can see that they are, in a way, not trivial: they go in one direction on the saturated colors versus the neutral axis, and they drift along. So there are real differences.

Of course, what is missing, if you want to apply a filmic rendering to a digital camera, is that we also need to model the camera negative, and for this there are no established workflows. What typically happens is that you shoot a test chart, or a certain scene, with a digital camera and with a film camera; then you develop and scan the film, and then you can construct again a Truelight list transform which says: this color should come out here, this color should come out there. Typically those list transforms are more sparse than what we get on the print side of things, so we sometimes see small interpolation artifacts introduced by this type of calibration, but it should give us a good starting point.

So what we did is we added to that table four different camera negative models, like K5207 and K5219, the daylight- and tungsten-balanced ones. And then we also put into the mix a film-gamut camera negative simulation that was a simple 3×3 matrix in log, which should bring a digital camera into something like a camera negative space.

And then we had another one where we basically skipped that process, in case you already have a camera negative, so you can apply just the rest. So we came up with, at the end of the day, 288 transforms.

can we match the um all of these 288 film looks with chromogen. That was the first question and then immediately the second question was how should we match

the 288 film looks? Should someone sit there like should I call Andy and say Andy do you have time for the next 12 months and try to manually um match um

all of these film try to match all of these film looks by eye and no this obviously will not going to fly. So what

So what we then did was look at machine learning, because if you have a cost function where you can say "make this match that", a machine learning model can typically use backpropagation to find a solution. And it turned out that, by design, Chromogen is smooth and C² continuous, without any kind of sharp edges, so it's actually quite friendly to machine learning and to backpropagation. Backpropagation is basically chasing down gradients, and if the tool you're trying to emulate is very well behaved in terms of gradients, machine learning kind of likes that.
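To make that concrete, here is a minimal sketch (hypothetical model and loss, not FilmLight's actual training code) of fitting the parameters of a smooth, differentiable look to LUT samples by plain gradient descent:

```python
import torch

def smooth_look(rgb, params):
    # Stand-in for a smooth, C2-continuous look stage: per-channel gain plus
    # a saturation-style crosstalk term. Everything here is differentiable.
    gain, cross = params
    mean = rgb.mean(dim=-1, keepdim=True)
    return rgb * gain + (rgb - mean) * cross

inputs = torch.rand(4096, 3)          # sampled LUT input positions
target = torch.rand(4096, 3)          # the LUT's output at those positions
params = [torch.ones(3, requires_grad=True), torch.zeros(3, requires_grad=True)]
opt = torch.optim.Adam(params, lr=1e-2)

for step in range(2000):
    opt.zero_grad()
    loss = ((smooth_look(inputs, params) - target) ** 2).mean()
    loss.backward()   # gradients are well behaved: the model has no sharp edges
    opt.step()        # a real system would use a perceptual, delta-E style loss
```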

So it was an interesting experiment, and it turned out pretty well. At the end of the day we had a machine learning model where you can put in a lookup table and it gives you the 50, 80, 120 parameters of Chromogen which, when you apply them together with the Truelight CAM color appearance model to get to, let's say, sRGB or P3, will match the lookup table. After the training, this is the histogram of the losses: a delta E in our internal Eab color space that Richard designed.

We have around 200 of these, even more than 200, 250 of these looks, that are so close that in a butterfly you would not see any difference. So we're very pleased with that. A few are a little bit further out, but I'm still tweaking, so maybe I can push this histogram further to the left. But we were really pleased with that. In a way, what we have done is let you skip the hard first part of climbing up that Chromogen hill towards a film look, to give us and the colorists a kind of shortcut. From there we can explore how to further modify the look, because once it's in Chromogen, it's all parameterized and modifiable. Or, coming back to this slide: we added samples, and we showed that we can match most of the samples with Chromogen in a very satisfying way. We hope that, providing these as starting points, creatives can now take them as a starting point instead of having an opaque lookup table, which you cannot really modify, or have to grade through, or do all sorts of other things with; instead you can use human-understandable parameters to explore new looks. So let's have a look; let me switch to Baselight.

So what we're seeing here is a Truelight profile which simulates a camera negative and the whole print process; this is basically what it looks like. And if I go to this Chromogen fit: what we have here now is a process which works together with the Truelight CAM color appearance model, and also with ACES and other DRTs, which is a nice thing because it's a scene-referred edit. Together with the DRT you're using, it will produce something that looks very much like the other one. If you look at the different images here: this one is done entirely by formulas, and this one by the actual ground-truth data.

We can see that we're getting very close results here. There are some small differences; I will point out a few in a second. But in a typical butterfly, you will not see a lot of difference. In a way, we never really tested the print film emulations themselves with anything more than a butterfly. In a butterfly, the differences you're able to see are several orders of magnitude bigger than in an A/B comparison, because in an A/B comparison your eye can stay fixated on a certain point, and you turn spatial color differences into temporal color differences. You are way more sensitive in an A/B flip; if you instead look around from this spot to that spot, your eye roves along the image and basically forgets what you saw before. But let's do an A/B just to get an idea. So this is the Chromogen, and this is the

print film emulation. We see there are some subtle differences, but I believe nobody would see this in a butterfly. The differences come from, if I go for example here, the print film emulation; and I tracked this down: it's actually not caused by the print film emulation but by the camera negative simulation, which, because it's based on a sparse set, can produce some artifacts due to the larger interpolation. And if you look at the

Chromogen: because Chromogen cannot do unsmooth or broken edits, it will diverge from that ground truth, because it simply cannot introduce artifacts. In a way, these are differences we're pleased to see, because they mean the Chromogen fit is more robust and smoother than the lookup table here. Or if we look at this one here, additive mixtures (you know I'm a big fan of those): this is the LUT interpolation and this is the Chromogen, and you see it's not producing those breakups. So this is the main reason things diverge from each other, and we thought that's a good thing. And it's cool that we can also add these as starting points to Chromogen. So what we are planning to do is give you a few of

those options as a tool, where you can choose a camera negative and a recorder stock; you see that just by changing it, for some combinations, there are quite some differences. And then you can pick the print: you can take an ACP30 or a 3590 or a K 2393, whatever that is, I don't know. And then you can also choose the different lamp spectral simulations, as Richard showed: the generic one, or the one based on the Kinoton (I guess it was a Kinoton) and the real Kodak spectral

transmissions. And the beauty is now, first of all, that this works in a scene-referred workflow, and you get a stack of operations that have meaningful labels and that you can further tweak to your liking. So it's not about just applying something and blindly using it because it's film; it's about having something as an inspiration and then diverging from it. If you're shooting on film, we also have the Cineon printing density variant, which takes away the camera negative simulation, because you already have a camera negative, so you don't need to simulate it. And now when we jump to the

Marcie: this is now how the Marcie should actually look according to the whole process. Finally we see her again in the right colors. Another thing we have done over time is that colorists send us lookup tables, saying, oh,

this is a lookup table I like; can you do the same thing, but instead of mimicking film, mimic my lookup table? And this also works quite well in many cases. For example, here's a lookup table that was originally log to Rec.709, and what we've done here is a fit; in this case it's not only fitting Chromogen, but we also added a Base Grade and a CDL grade to the mix. So now we can see what the look does, and you can adjust it as well, if you want to take out the tint; I will show you a little about that. But it was also very pleasing to see that it generalizes to looks which are not strictly coming from a very well calibrated process. So we

hope that this allows you to take all of your references, the lookup tables you like to use, and translate them into a modern world, and then be able to tweak them directly. It's much better to tweak the appearance of a certain process directly, compared to adding grading on top of it and grading through a LUT, which might not do the right thing. So now you have really fine control over the appearance. We also

get some pretty broken LUTs, I would say. Sometimes you apply a LUT and you think: oh my god, what is this? But the DP says, this is my LUT and I want to use it, and you know you will have a hard time with that grade. We also sent one of those through the same process, and what we see is that it does not reproduce the breakage. It really doesn't like producing artifacts, so it will diverge from those extreme drops in exposure which tear the image apart. For

example, if we go to this one here: you know those LUTs where the reds never go above a certain value? With the Chromogen fit you can get something which is smooth; again, in a butterfly it's not exactly the same, of course, but I think you can show this to the DP and say: it gets the essence of your look, and you can further tweak it. So this also works pretty well for pretty extreme, broken LUTs, and we're really excited that this exercise is turning out to be quite useful for our users. All right,

where did I go here?

Then a few other words. We all know that image formation is not only about color, right? Texture, adding film grain on top of digital images, is a very trending topic, as we can see in this quazillion of plugins that do it, basically imitating film. But we are not so much interested in exactly matching film in that respect, the same way we were not really interested in matching film with Chromogen. We were not out to imitate film, but to understand what we like and why we like it, and then come up with a framework that models that. And I think the same can be done with film grain. Instead of just mimicking film, we need to ask: why do we like images with visible film grain? Why do we like the randomness introduced by the photographic process, and can we treat film again as an example for a more general framework of picture formation?

This is not done; we're at the beginning, but maybe we'll come up with a framework that is very interesting. One example: today's print film emulations, as Richard also explained, have a very strong shadow toe, meaning they lift the shadows a lot. And I think this correlates not with what film did in the color domain, but with what film did in the spatial domain, with noise.

Because if we take an exposure step series and we add some noise, what actually happens is that, on average, by adding noise, especially in the shadows, we are raising the picture level. But if we do this with grain, we should not also do it with the print film emulation, otherwise we apply it twice, right? There are a lot of examples like this where I think what we're doing now with print film emulation lookup tables actually belongs in the domain of noise.
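A quick numerical sanity check of that claim (a sketch with illustrative numbers): zero-mean noise added near black raises the mean level once the signal is clipped at zero, which is one way the spatial randomness of grain ends up lifting the shadows on average.

```python
import numpy as np

rng = np.random.default_rng(0)
shadow = 0.01                                    # a code value just above black
noise = rng.normal(0.0, 0.05, size=1_000_000)    # zero-mean "grain"
observed = np.clip(shadow + noise, 0.0, 1.0)     # a display cannot go below black

print(f"clean level:     {shadow:.4f}")
print(f"mean with noise: {observed.mean():.4f}")  # noticeably higher than 0.01
```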

We've done some experiments, and we already have some prototypes which look very convincing, very pleasing. And I want to end this talk with: the past is a place of reference, not a place of residence.

The past is a place of learning, not a place of living. So stay tuned; I hope we make some progress there in the future. And now, handing over to Andy with some finally practical examples.

And while you do that, I'm going to wind up the film strips. It took literally an hour to get the film strips all the way through, and they're still not there, but I'm going over there. Andy, it's your stage.
>> Yeah, thank you. So at least now you've not only used the print film emulation, you've had one in your actual fingers. That's one thing to cross off your bucket list, I guess.

Ah, okay, seems like that thing has stopped working.

Ah, here we are. I also want to leave some room at the end for questions to all three of us speakers, so I hope I won't take too much time. So,

what I wanted to show here is not really the exercise of accurately reproducing film, but more this exercise: okay, we have a lookup table that someone wants to use on a project, which I have here applied to a few shots, and we first want to release that LUT from its, let's say, Rec.709 constraints on the output side, and then we also want to modify it further.

So I already sent that LUT through our Chromogen ML training. I don't know how the LUT was built; someone gave it to me to check out, but I'm pretty sure it contains some kind of print emulation, just from the look we're getting. So now I've applied the Chromogen ML fit to this one (and I forgot to mark this one). What we have now is that Base Grade, CDL, and Chromogen stack. And

chromogen. So we can see that um and and the reason we added these as optional um parameters or tools to the chromogen ML fitting is that chromogen itself is not

capable of doing an simple exposure shift. and and a lot of lookup tables

shift. and and a lot of lookup tables they contain an exposure shift of I don't know half a stop or a stop up or down and um and without that uh base grade we noticed that chromogen often

would waste a lot of stages and parameters just trying to mimic a simple um exposure shift when um this was the much easier solution and additionally we
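As an aside, the operation Chromogen was wasting parameters on is trivial when expressed in the right place; in scene-referred linear, an exposure shift is just a multiply. A one-line sketch (hypothetical helper name):

```python
def exposure_shift(linear_rgb, stops):
    # Scene-linear exposure change: +1 stop doubles the values, -1 halves them.
    return linear_rgb * (2.0 ** stops)
```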

Additionally, we added a CDL grade to the optional tools. You often don't need it, but sometimes it really helps a lot, because many existing lookup tables floating around in the world are just a print film emulation plus some, let's say, lift/gamma/gain that someone did upstream or downstream of it. We noticed that for some lookup tables the model without a CDL struggled, was sometimes not accurate enough, or wasted a lot of Chromogen parameters on simpler things; there, this really helped a lot.
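For context, the ASC CDL is a tiny, published formula: per channel, out = (in x slope + offset)^power, followed by a saturation mix against Rec.709 luma. That is exactly why it soaks up simple lift/gamma/gain so cheaply. A sketch with made-up example values:

```python
import numpy as np

def asc_cdl(rgb, slope, offset, power, sat):
    # Per-channel slope/offset/power, clamped so the power term stays defined.
    rgb = np.clip(rgb * slope + offset, 0.0, None) ** power
    # Saturation as a mix against Rec.709 luma, per the ASC CDL definition.
    luma = rgb @ np.array([0.2126, 0.7152, 0.0722])
    return luma[..., None] + sat * (rgb - luma[..., None])

# Example: a mild warm/cool split plus a slight desaturation (values invented).
out = asc_cdl(np.array([[0.18, 0.18, 0.18]]),
              slope=np.array([1.1, 1.0, 0.9]),
              offset=np.array([0.02, 0.0, -0.02]),
              power=np.array([1.0, 1.0, 1.05]),
              sat=0.95)
```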

And then the last stage is the Chromogen one. Or maybe let's go to this shot, so we can see our Base Grade, then the CDL grade, which in this case also does a little bit of contrast. We used CDL here because you can then also present it as a Film Grade-style set of parameters in Baselight, or in a Video Grade style. And now we can also see, for example, that it adds some warmth in the highlights and a little in the shadows, but then a strong coldish tint more in the gamma, whatever that means in a log space; we are applying it here in a scene-referred log space. And then the most interesting part is the Chromogen here at the bottom, where, let's say, the funky color crosstalk and all the interesting effects are happening. I have to say that here I

noticed in the projection (I wanted to go a little deeper into working precisely on the skin tones) that it's not really showing them as nuanced as I see them here on my screen. But I hope you still get a good idea of it.

So in the end, if we don't want, for example, the exposure correction and the tinting of the LUT, we could just turn the Base Grade and the CDL off and take only the more complex look essence of the thing as a starting point. Or, if it's now too warm for us, we could replace it with a basic Base Grade tint; let me bypass all of these; yeah, and we would also get to a nice starting point. But maybe let's stay with those for now.

Another thing, of course, is that we have them in separate layers. So we could also fade the opacity down for those, or we could say: I want to keep the contrast component of the CDL but reset the color tinting; then we just reset the three trackballs here. So you see, there are endless creative possibilities, and the important thing is that it's in a, let's say, human-understandable way, or at least a colorist-understandable way. Every colorist, I guess, has an understanding of lift/gamma/gain; or, if you're a more film-style colorist like me, then you like exposure and contrast, for example, which can replicate the same things. Okay, so now let's step into

the Chromogen. For me personally, having this available was also highly informative, because you can take a lookup table that you've maybe used over the years and really started to like, finally go in and examine it when you have some spare time, and just see: okay, what does it actually do? The Chromogen ML does not show you how it was built, but it shows you how you could build something that gives a very similar response. And because I know the Chromogen tools pretty well, it's basically a nice way of explaining to me: what does this look do, how is it made, and what are its key components? So for this one here, for example, I

noticed that the bleaching of the blue colors is done separately. We have one highlight bleaching stage here (I guess you can barely see it there in the background) that is only affecting the blue color area, and then there's another bleach stage, I remember, here at the end. Actually, it's also touching the blues again, but this one, I noticed, is doing mostly the warm colors, red and yellow, for example; you can see it in the car lights. And this is already the

first thing we could start modifying. If we think: oh yeah, actually I'd like to keep a little more color here in these bright yellows and reds, we can go in and, for example, slightly reduce the bleaching in the warm colors.

And with every look, we should then also check that it's not doing anything weird-looking, for example in this image here with the bright window in the background; but I would say it looks okay.

And then what else can we do here? I see, for example, there's the 'red shadows brighter' stage; we can see it takes the darker reds and makes them brighter. And at least here on my screen, to my taste, these reds are a little bit too bright in this instance. So let's reduce that; or we could make them even darker, or make the whole thing a little less complex and delete the whole stage, or just keep it bypassed, for example.

And what else do we have here? Here we have something that brings in more color variation (not saturation variation): it basically stretches the color variations hue-wise in the yellow range, so green to orange-ish. And

what else is there? Ah yeah, there were these, for example: the cyan colors, because we have that element; I guess here we have it a little more prominent in the frame. There is, for example, something that makes the cyans a little darker, and also something that desaturates the highlights in the cyans. I hope you can see these subtle differences in the projection. So if we want to keep a little more color there, we could go in and say: okay, let's not desaturate it that much, and let's dial that one down a little bit, for example. And

yeah, I think having that red stage off is probably also helping a lot. Another thing we could add, especially here for that dark scene with the city, is maybe a little bit of tinting in the shadows. Of course I can also add new Chromogen stages; how they work individually, I explained, I guess, two years ago here on this stage, and it's also available as a presentation on YouTube, called 'Chromogen in Action'. And the good thing is that now I can add the tinting, for example, before I do a contrast adjustment or before I do the bleaching.

I mean, for the shadows the bleaching is not so relevant, because it always goes for the highlights. But especially when tinting the highlights, or doing any work in the highlights, it often makes a huge difference whether you do it, for example, upstream of the bleaching process, or of tinting processes. Basically, if there are tints in the Chromogen stack, you should sometimes experiment and build an intuition about whether your edit should be done upstream or downstream of that stage. I would say the neutral tint and the bleach stages are where some major changes in the response of the tools often happen. So, ah, okay: we

tools. Uh so what would ah okay we wanted to add basically some uh more coldish color here to the shadows

and maybe let's fine tweak the the hue to make it a little bit more bluish. I'm

not sure how well you can see that with the flare and the projection. That's why

I might overdo it a little bit.

>> Yeah. Or maybe we can dim down the lights in the room, please?
Ah yeah,
>> thank you very much.
>> And now, because I don't have the stage lights hitting my eyes completely, I also see more details on my monitor here. So this is

before the neutral tint, and this is after. And maybe I'll also show you that red thing again, because I thought that was... let's go here. Ah, actually I already reset that parameter, but I guess it was quite high. So before, it was like this, and we just took it out, and I think that also helps a lot. But now, for example, with

that neutral tint: we should always label our added stages, so I'll label it 'cold shadows'. I should also point out that the existing names in this Chromogen were built by the Chromogen ML: they added a basic naming scheme, so the Chromogen ML adds meaningful labels to all of the stages it creates, to make it easier for us to find things. Now, especially in these dark shots here, we're losing a lot of the skin color. So this is

before, and this is after. So maybe we can do something about that. In this case, I would inject the stage that recovers the skin tones upstream of the blue tint, because we want to have more values to work with before we push them towards blue. So I add, for example, a sector saturation, and then I move it upstream of the cold shadows. I can pick, for example, that skin hue here and dial it up. And

I mean, this is way too much for me, but maybe, if I remember the response... oh yeah, it's still quite pale on the projector. So hopefully that's looking good for you, or at least better.

But I would never leave a stage like that, especially a saturation stage, because when we add the saturation, we can see in this area here that it's doing the most work on the already-saturated colors if we apply it to all of the, let's say, skin hues. That can look nice on a picture like this, where we don't have very saturated colors, but if we suddenly have a saturated orange in the frame, it would be pushed much further out, and probably too much. So whenever I do something like that, I always dial in the chroma slider, which focuses our tweak only on the less saturated colors, the colors closer to gray. That's why that slider goes towards gray on the right and towards the more saturated tones on the left. And now, if you watch the point cloud here, we can see it: our added saturation still has a strong effect on the skin, but it leaves the saturated colors mostly untouched, with a smooth rolloff.
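A sketch of what such a chroma-weighted tweak does conceptually (illustrative math, not Chromogen's actual implementation): weight the saturation boost by a smooth function of distance from gray, so near-gray skin tones move a lot while saturated colors barely change.

```python
import numpy as np

def smoothstep(x):
    x = np.clip(x, 0.0, 1.0)
    return x * x * (3.0 - 2.0 * x)              # smooth 0..1 ramp, no hard edges

def sat_near_gray(rgb, amount, chroma_limit=0.5):
    mean = rgb.mean(axis=-1, keepdims=True)
    chroma = np.abs(rgb - mean).max(axis=-1, keepdims=True)
    weight = 1.0 - smoothstep(chroma / chroma_limit)  # 1 at gray, ~0 when saturated
    return mean + (rgb - mean) * (1.0 + amount * weight)

skin = np.array([[0.45, 0.32, 0.26]])    # low-chroma skin tone: gets boosted
orange = np.array([[0.90, 0.40, 0.05]])  # saturated orange: barely changes
print(sat_near_gray(skin, 0.5), sat_near_gray(orange, 0.5))
```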

And additionally, I might focus it onto the shadows, because I know we added the blue only to the shadows, and I don't want all the skin tones to get warmer. So I can say: okay, focus this more on the shadows too. It has quite an effect on this shot; but on this shot here... oh no, actually it still has quite an effect. So probably we need to drop the pivot, so that we're focusing it even more onto the darker shadows. Now it still has an effect, but much less.

So: before, after. And maybe let's call it 'dark skin saturation'. Also, just as a general principle when working with Chromogen: you should never expect such a tweak to have zero effect on other areas, because that's one of the strengths of Chromogen. As Daniele described, everything is very smooth and it cannot create these sharp cusps, but that also means we have to accept that there are always these collateral effects: a small tweak here might have a small effect on other areas. That's actually a good thing, and it helps us. It's just that sometimes, if it does too much in some area, you maybe add another tweak to work on that. So,

yeah, I also want to come to an end, to leave at least 10 to 15 minutes for questions. One additional thing that I think is very helpful is that the whole contrast of the look is also controllable with the contrast stage. So if we want it to be a little more contrasty, we can do that here by raising the contrast value. And if we want the highlights, let's say, a little brighter, we can also play with the highlight pivot, for example, to get a little more strength in there.

And maybe the last thing I want to do is change the color of the skin tones, basically squash them a little bit towards one more similar hue. At least on my monitor, they look a little bit too reddish.

And I think I might do that more upstream in the stack. I can tell you, the more you work with Chromogen, the more you build an intuition about where these stages have the best effect. Sometimes it also just helps to build something and then, when you have a grading surface connected, sweep the stage quickly up and down through the Chromogen stack with two buttons; that's also what helped me a lot. And especially if we want to modify the skin tone variation, we should again do that only in the less saturated colors, so I move that chroma slider over here. And let's label it 'skin squash'.

Okay. And yeah, I wanted to do a few more tweaks, but I guess you got the idea, and that it's a really helpful tool. So that's now the before and after of the whole look. And the beauty of it, from a technical point of view, which always makes the inner nerd in me smile, is that these images are now not going through any kind of lookup table. No LUT interpolation is happening anywhere. These are all shader-based, formula-based operations, from the decoding of the camera image to the final display rendering of the DRT; all high-precision floating-point shaders.

So I think we can see that we can easily get a filmic appearance for the images with this, and that we can also tweak it in a very powerful way to our creative needs. So yeah, thank you.

>> So, are there any questions for either Richard or Andy or me? Yeah, there's one. I'm not sure if we have microphones. Yeah, yep, there's one over there. Okay, here we are.

>> Try it again.

>> There we go.

>> Right. I was wondering whether Chromogen ML will be available for all of us to use, to put our LUTs in and try to get them cleaner; whether this will be available for all of us.
>> Yes, of course, that's the point. We want to make that process, having a lookup table and turning it into Chromogen, a function in Baselight. Yeah.

>> But by 'all of us' we mean Baselight users, we have to say.
>> Yeah, no plugin for Resolve.
>> Yeah, I thought that was obvious, but thanks for mentioning it.

>> And one additional thing: if you need it right now and you are a Baselight 7 beta user, then just email us, or Daniele especially, and we can help you.
>> We're doing nothing else at the moment than converting LUTs for users.

Yeah. Any other questions?

Douglas has one.

>> Thank you for the presentation. I'm curious to know what happens if, say, you have your stack in Chromogen that replicates your Rec.709 LUT, and you want to make an HDR version that perhaps has a bit more colorfulness and exploits a little more of the wider gamut of P3, for example. How does that work?

>> Yeah, that's a very good question. If you have a log-to-Rec.709 LUT, you will probably have hard clipping in there, but the resulting Chromogen will try to fit it while approaching the gamut limits in a smooth fashion. So you will get colors outside of Rec.709, which for the Rec.709 version are then clipped by the DRT; but you also have more colors out there. So it comes naturally that you have HDR highlights, or more saturated colors, in there. And because it's tweakable, you can also further modify it. You can say: this is the transfer of my lookup table, but now I want to explore a more saturated world. And you can do that. But it works kind of naturally, in that it gently extrapolates outside of the Rec.709 volume.
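A toy illustration of that smooth approach to the limit (not FilmLight's actual math): a LUT baked for Rec.709 hard-clips, losing all separation above the limit, while a smooth fit keeps values ordered and distinct, leaving something for an HDR or wide-gamut rendering to recover.

```python
import numpy as np

x = np.linspace(0.0, 2.0, 9)     # scene values running past the display limit
hard = np.clip(x, 0.0, 1.0)      # baked Rec.709 LUT: everything above 1.0 is flat
soft = np.tanh(x)                # smooth compressor: values stay distinct and ordered
print(np.round(hard, 3))
print(np.round(soft, 3))
```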

>> Thank you.

>> Good question.

Any other questions? Yeah, there's another one over there.

>> Hi. Is the workflow with the Chromogen and all these different layers you can add and add and add at some point destructive, or can you just go on and on and explore more?

>> So, if you have a smooth transformation and you add another smooth transformation, then typically you get a smooth transformation out of that; but if somewhere in that chain there is a cusp, then you have a cusp in the final output. That's why you can add quite a lot of stages, and as long as you're not doing exactly the same thing twice at exactly the same point, you can get quite far. We have not found a limit where it starts to break, because every tool keeps its gradients under control; so you can add a lot of stages, basically.
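The fact being relied on here is just the chain rule: smoothness survives composition, and so do cusps. In symbols:

```latex
% If f and g are C^2 (twice continuously differentiable), so is their composite:
%   (g \circ f)'(x) = g'(f(x))\, f'(x),
% and the second derivative involves only g'', g', f'', f', all continuous.
% Conversely, a cusp in any one stage remains a cusp in the composite.
f, g \in C^{2} \;\Longrightarrow\; g \circ f \in C^{2}
```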

>> Thanks.

>> But I also have to say that if the source data is, for example, underexposed and has a lot of color noise, then it's possible to amplify that existing noise in the footage. So it can happen, if you've basically dialed everything to 11 with Chromogen, that you will see some, let's say, nasty images; but these are usually amplified errors in the source images. Yeah.

>> Would it make sense to maybe achieve a look and then, like in DaVinci with different nodes, where you add nodes and nodes and nodes and at some point you may revisit or redo the look, because then you know what you want to do, redo the look with fewer nodes and a clean image? That's why I was thinking about this.

>> Yeah. Actually, when you do the fitting, you have a parameter that says how many stages Chromogen is allowed to use. So you can say: only use 10 stages. Then it might not have the expressiveness to model exactly every nuance that your LUT has, but it will find the optimal, let's say, 10 stages to come as close as possible. That's a beauty of backpropagation: it figures it all out by itself. In fact, this is why we added it as a parameter: if the goal is to have something that you can further tweak and understand, sometimes it's better to have something that doesn't match 100% but goes in the same direction and is simpler.

>> Yeah, we agree; that's very clever, really.

>> And this is also exactly what I did last week. Basically, I took a LUT and then added a lot of things. But I was also very careless while doing it, because I already knew that once I'm happy with the look, I will refit it through Chromogen ML and have my compact final thing. So I was very expressive during the lookdev sessions, because I didn't always have to keep in mind: oh my god, I'm going to apply this to all the shots. I just knew: okay, let's play wildly on the canvas, and once I'm happy, let Chromogen fit it, and then I have a good set of parameters to tweak on the project. You know, in a creative session, sometimes the cinematographer goes out of the suite and you look at your stack like: oh my god, what did I do? And then you basically clean up. So it can be used in that way too. There's another question from Mika; maybe pass the microphone on. Yeah,

perfect.

>> Just a quick follow-up, Andy: are you saying that you might use Chromogen ML to analyze a LUT, maybe ending up with 40 layers, then you tweak, tweak, tweak, and then you push that through Chromogen ML again to make it six or ten layers?

>> Yeah; maybe then still 20 stages or so. Or I combine it with other grading tools, maybe a little bit of curve grading, and run it through a second time.
>> Yeah, exactly. But he's crazy.
>> That's wonderful, actually; I like that concept, that's great.
>> Since the microphone is right next to me: did you title this session correctly? Is it really just 'from print stock', or is it also including the OCN?

>> Yeah, you're right, as always.
>> Well spotted, Charles. Yeah.

>> Since the microphone is still next to me: when will we see TextureGen or SpatialGen, or whatever the equivalent is for analyzing all the texture?
>> The spatial domain; picking up the hints.

>> Ah yeah, Dirk always picks up the hints. I cannot tell you; I mean, the Chromogen journey was a long journey, and I don't know. But it's much more interesting to tackle that subject conceptually, in that way, than to just, you know, slap film grain from an 18% gray scan of a negative onto the image. That's not really interesting from a learning point of view, and I think it's also not expressive, because you cannot mutate it very much. I mean, Chromogen was written three times; the first two times didn't work.

And I'm grateful that at FilmLight we get the time to let those ideas develop, even if they're not ready yet, and that you can devote some of your time to exploring them. It's like simmering a tomato sauce for your spaghetti: it takes time, and then you don't think about it for two months because you're doing something else, and then you come back to it. Having that possibility is really special at FilmLight. So, I don't know; it's a journey.

>> Could this tech be applied to shot matching?
>> Shot matching works differently; that's what we showed at the SMPTE conference. For shot matching we use computer vision to construct a scene graph, where we then say: okay, that yellow array of pixels in this shot is actually the same yellow pixels in the other shot. So we construct a dependency graph, and from there we derive what would be needed, on average, to match the shots together. So it's a different architecture.

>> But is this something that's in the works, a potential thing that we may see?
>> Yeah. We showed shot matching at the SMPTE conference in LA three weeks ago, and it is in early testing at the moment. So it will come at some point in the future.

>> Thanks.

>> Just following on from that: at what point will the colorist be obsolete?

>> I don't think at any point. What we're doing is using computer vision and machine learning to empower the artist, not to make her or him obsolete. I think that's not the point.
>> But the workflow will be much faster with all these tools.
>> It will be different, and hopefully more efficient; it will also give room for more time spent creatively, because you don't need to do the work that adds friction to it.

>> Yeah; I guess I never enjoyed doing white balance and exposure matching on, I don't know, 2,000 shots of a feature film. It had to be done, but if that were already done by some, let's say, ML assistant, then I would be quite happy to spend the gained time on really shaping the images.

There's another question, to my far left.

>> Thank you so much. Just a quick question about grain, because you started speaking about how texture and grain can be incorporated into the files. How does this software integrate grain into the shots? Is it something that is simulated on different levels, like on dark tones and middle gray and highlights, or is it something with a more organic way of approaching the file? How does it work within the software?

>> Yeah. So we have a grain simulation tool, which is also used a lot in productions, where you can apply grain differently to the shadows and the highlights, so it correlates a little bit with the image, and you can use several ways to shape the MTF of the grain itself.
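To make "apply grain differently to the shadows and highlights" and "shape the MTF" concrete, here is a toy sketch (my own illustration, not the Baselight tool): filtered noise, so its spectrum has a grain-like size, scaled more in the shadows than in the highlights.

```python
import numpy as np

def add_grain(img, strength=0.06, softness=1.5, seed=0):
    rng = np.random.default_rng(seed)
    noise = rng.normal(0.0, 1.0, img.shape)
    # Crude separable blur to shape the noise spectrum (its "MTF"):
    k = int(2 * softness) * 2 + 1
    kernel = np.hanning(k)
    kernel /= kernel.sum()
    for axis in (0, 1):
        noise = np.apply_along_axis(np.convolve, axis, noise, kernel, mode="same")
    # Weight the grain toward the shadows, fading it out in the highlights:
    weight = 1.0 - np.clip(img, 0.0, 1.0)
    return np.clip(img + strength * weight * noise, 0.0, 1.0)
```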

But the slides I was referring to are about a more fundamental concept, or more a wish to understand that process more fundamentally and push it to the next level. So I would say we have a standard grain tool in Baselight, which I think Richard wrote a few years ago, and it does the job; but the question really is: why do we apply grain to digital images? Why does the industry seem to like it so much? That's actually the interesting question.

>> For me, what is really interesting, and where my eye sees the difference, is especially in the rolloff, when you go from very bright parts of the image to very dark parts. Grain really helps blend this transition and makes the file more organic, or just look better, to say it crudely.
>> No, no, I understand.
>> And that's what I try to look for when applying grain: not something that you slap on the file and it looks better, but something that really helps you feel parts of the image in a different way, so they can blend together. That was my question, maybe.

>> Yeah. And what we're trying to do is translate that description into a model or a framework; but we haven't done that yet. So I would say the grain in Baselight today is a similar approach to what you find in any of these OFX plugins, basically.

>> Thank you.
>> A possibly related thing: we looked at doing dithering for OLED displays, because the black tones were different from pixel to pixel. But if you put a very low-level luminance dithering signal on there, it can actually make the tone curve look a lot smoother, because we don't see flicker in shadows. We can do a four-frame or an eight-frame cycle, and nobody sees it, and that gives you a smoother response down at the bottom end. The trouble with that is that if you stop on a single frame, we would have to keep dithering for the display, and that didn't fit in with Baselight as it was. One day we will come back to that.
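A toy sketch of the temporal-dithering idea (illustrative numbers, not the Baselight experiment): a value between two display code levels is shown as a short cycle of quantized frames whose average hits the target, smoothing the effective response near black.

```python
import numpy as np

def dither_cycle(value, levels=64, frames=4, seed=1):
    rng = np.random.default_rng(seed)
    q = value * (levels - 1)                   # target in display code values
    shown = np.floor(q + rng.random(frames))   # randomized rounding per frame
    return shown / (levels - 1)

cycle = dither_cycle(0.0137)
print(cycle, cycle.mean())   # individual frames are coarse; the average is close
```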

>> Yeah.

>> Thank you.

>> Yeah. And also, one very pragmatic answer to 'why do we like grain' is that it hides artifacts nicely. I guess many of us colorists know that sometimes it's just nice to throw it on, and it hides some of the mushy things that we don't want to see.

>> Yeah.

>> There's another question.

>> Could you tell us something about the design process of Chromogen, as far as how you ended up with this specific set of tools? It seems you could have ended up with half or double the amount; how did you end up with specifically these tools and stop there?

>> You mean the actual tools of Chromogen?

>> Yes.

>> Well, that's a good question. I think I've talked a lot about the philosophy behind that: you pick a very simple concept. The concept was to do what I learned from Richard; I mean, the one thing I really learned from Richard was the beauty of simple solutions. All of the different Chromogen stages are almost trivially simple in terms of what they actually do in the right domain, and they utilize only a few concepts, like inhibition and crosstalk. I didn't want to do anything else, and that's basically where it naturally ended up: it's only inhibition and crosstalk; every stage is exactly that, in different domains. And it naturally lands at 10 stages, and we have 10 fingers on two hands, so that's probably... We did have different stages at some point, but we also removed stages, because they didn't fit the philosophy of the others. On a computer you can do whatever you like, right? You can have one slider which does one thing, and another slider which does a very different thing in a very different color space, and you just mush them all together. But ethically, from an image-processing-ethics point of view, if there is such a thing, this is not how we do it. At FilmLight, we only want to say: okay, this is a model or a framework or a concept, and we do that. We don't change domains within a tool, and we don't put a slider that does this next to a slider that does that just because someone wants it. If it doesn't make sense to put them together, we don't put them together.

>> So you had the goal of mimicking film, and just tried to make the smallest set of the simplest tools to get there?
>> Not only film; we tried to understand what makes a pleasing color rendering, and to use film as an example.

>> Thank you.

>> Part of the thing in Truelight was that we measured fewer patches, partly because it took a long time to measure them. But the fewer patches you had, the more certain you were that the transform you fitted between them was going to be smooth. You wouldn't have two very nearly similar values. We also had something that would prune out members of a list which became dissimilar: if you inverted a list and something took you from two very different colors to almost the same color, then the inverse transform gets unstable, so you have to go and trim that back. And there were also parameters within Truelight that affected the fit, so you could have something that goes near all the points, not exactly through them, but is always smooth. In the end, we ended up leaving those at the default values all the time. It probably took me eight years to find that switch.

>> Yeah.

>> Yeah. There are a lot of hidden parameters, but the defaults are pretty good.

>> Yeah.

>> So, one more question; then we need to wrap up, so maybe it's the last question. Yeah.

>> So, if you have, say, a big project, would you try to find one stack that kind of works for the entire project, or would you have a big number of different stacks? How would you approach that?

>> Yeah. My philosophy is that I always want the whole project to go through the same thing; I usually call it the rules of my color world, applied so that every image goes through the same thing. So I'm not a person who likes to do night-exterior, day-interior, basically five different kinds of LUTs. I mean, sometimes you do it, when you have different time periods, or things that really should be different worlds with different rules applied to them; that can make sense. But I would say for a normal story I usually like to have one. But then, what I also realized is that once you do a project, and if you're lucky, as

the final colorist, you're also able to do the dailies; then, during the shooting and the evolution of the project, you usually realize that what we did in the initial look test, with some costume tests and makeup test images, was not as refined as we would do it now that we see the actual scenes, and then finally in the final grading, when we see the images cut together in the correct order. So I think being able to tweak the thing is definitely a big plus. And what I realized is that, having the Chromogen available like that, scene by scene (not shot by shot, but scene by scene) I sometimes go in and do small adjustments, but I would never throw it out completely and throw in another thing. I would go in and do some tweaks, maybe turn something off or add a small tweak, because it just resonates better with a specific scene. Yeah.

>> Thank you.

So, thank you all. And yeah, thanks Richard, thanks Andy.
