Diffuse reflection describes reflections where the direction of the reflected light is completely independent of the direction of the incoming light. Plaster and plain white paper behave very much like diffuse reflectors. A rough explanation is that light entering the material bounces around so much that the direction in which it eventually leaves is totally independent of the direction in which it entered.

Specular reflection is in many ways the exact opposite of diffuse reflection. A mirror is the prime example of specular reflection. Light that hits the material is reflected back at the same angle with respect to the surface normal. Note that with specular reflection the light does not penetrate the surface, unlike diffuse reflection.

Most materials reflect light in a much more complicated way than what you get by mixing ideal diffuse and specular reflections. A very common way to try to reproduce the more complicated behaviors observed is to use a so-called microfacet model.

The idea is fairly simple: instead of assuming the surface is perfectly flat, you assume it consists of lots of small facets. One often assumes that the material looks the same under rotation around the surface normal (isotropic). This allows one to consider the 2D case, which makes life easier. The surface then looks like it has lots of tiny V-like grooves, where the edge of a groove is called a facet.

The key point is that the facets are assumed to be so small that they are not directly visible. The situation is similar to how the surface of an orange appears smooth when viewed from afar. Assuming each facet is either an ideal diffuse or ideal specular reflector one can calculate how the surface appears from afar. Depending on the accuracy of these calculations one can take into account effects such as inter-reflection between the facets and shadowing, which can play an important role for rough surfaces.

The roughness of the surface is usually specified as the probability distribution of the specific angle of any given V-groove. Frequently a Gaussian distribution is assumed, such that most angles fall around the specified average, with some grooves having a much steeper or shallower angle. However it is also common to use a more intuitive “zero to one” scale, where for example zero means perfectly smooth and one means as rough as it gets.

While these microfacet-based models are fairly general and can be used to model a wide range of different materials, they have some limitations. For example one typically assumes that the facets always form grooves. To be more specific, the normal of each facet will never point into the material.

The “matte” material models diffuse reflection only. The diffuse color parameter indicates the color of the light that gets reflected. Any light that is not reflected is taken to be absorbed. It uses one of two models depending on whether the surface is rough or not.

For perfectly smooth surfaces the Lambertian reflectance model is used. This model assumes that the light is reflected equally in all directions, and that the object is thick enough that no light is transmitted through the object. It also assumes that the light exits the material very close to the point of entry, such that the error of pretending the light exits *exactly* at the point of entry is small.

The Lambertian model does not consider the roughness of the surface. This makes it inaccurate for many materials such as clay, concrete, sand and cloth, where a plain Lambertian material would appear too dark at grazing angles. The Oren-Nayar reflectance model was developed to correct this. It is a microfacet-based model, where each facet is assumed to be a Lambertian reflector. The model takes into account the interplay between neighboring facets, such as shadowing and interreflection. This model is used for rough surfaces.

The sigma parameter (\(\sigma\)) determines how rough the surface appears, with zero being perfectly smooth. Technically it’s the standard deviation of the angle of the grooves. The maximum value is 90 degrees; however, in the real world even very rough materials will have a sigma of no more than about 30 degrees. The rougher the surface, the more light is reflected back at grazing angles, giving it a flatter look.
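
To make the roughness terms concrete, here is a small illustrative sketch in Python (not LuxRender’s actual code) of the commonly quoted qualitative Oren-Nayar formula; the function name and parameterization are my own.

```python
import math

def oren_nayar_factor(sigma_deg, theta_i, theta_r, phi_diff):
    """Qualitative Oren-Nayar term multiplying the Lambertian BRDF.

    sigma_deg: roughness (standard deviation of facet angles) in degrees.
    theta_i, theta_r: incident/reflected angles from the normal (radians).
    phi_diff: azimuthal angle between incident and reflected directions.
    """
    sigma = math.radians(sigma_deg)
    s2 = sigma * sigma
    a = 1.0 - 0.5 * s2 / (s2 + 0.33)
    b = 0.45 * s2 / (s2 + 0.09)
    alpha = max(theta_i, theta_r)
    beta = min(theta_i, theta_r)
    return a + b * max(0.0, math.cos(phi_diff)) * math.sin(alpha) * math.tan(beta)

# sigma = 0 reduces to the plain Lambertian model (factor 1.0); a rough
# surface reflects more light back at grazing angles (factor above 1.0)
print(oren_nayar_factor(0.0, 0.5, 0.5, 0.0))   # 1.0
print(oren_nayar_factor(30.0, 1.0, 1.0, 0.0))  # > 1.0
```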

The glass material models a smooth dielectric such as glass or clear plastic. Mostly you can think of any insulator (non-conducting material) as a dielectric. Notably, dielectrics transmit almost all the light when it hits the material “head on” (normal incidence) and reflect almost all the light at grazing angles. The index of refraction (IOR) dictates how the material reflects and refracts light between these extremes. The index of refraction also determines how much the light will bend when entering the material. Mathematically, the amount of light reflected and transmitted by a dielectric is described by the Fresnel equations, and how the light bends is described by Snell’s law. Both are used internally by LuxRender.
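
As an illustration of these two laws (not LuxRender’s internal code), here is a Python sketch computing the unpolarized Fresnel reflectance of a dielectric interface, using Snell’s law to find the refracted angle:

```python
import math

def fresnel_reflectance(n1, n2, theta_i):
    """Unpolarized Fresnel reflectance for a dielectric interface."""
    sin_t = n1 / n2 * math.sin(theta_i)
    if sin_t >= 1.0:
        return 1.0  # total internal reflection
    theta_t = math.asin(sin_t)  # Snell's law: n1 sin(i) = n2 sin(t)
    rs = (n1 * math.cos(theta_i) - n2 * math.cos(theta_t)) / \
         (n1 * math.cos(theta_i) + n2 * math.cos(theta_t))
    rp = (n1 * math.cos(theta_t) - n2 * math.cos(theta_i)) / \
         (n1 * math.cos(theta_t) + n2 * math.cos(theta_i))
    return 0.5 * (rs * rs + rp * rp)

# Glass (IOR 1.5): only ~4% is reflected head on,
# while nearly everything is reflected at grazing angles
print(fresnel_reflectance(1.0, 1.5, 0.0))                 # ≈ 0.04
print(fresnel_reflectance(1.0, 1.5, math.radians(89.9)))  # ≈ 0.99
```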

LuxRender has two glass materials, named “glass” and “glass2”. The latter is the more correct one from a physical point of view, and it’s controlled via the IOR and the absorption of the material. The “glass” material has, in addition to the IOR parameter, a specular color and a transmission color. These two parameters simply modulate (filter) the color that is reflected or transmitted.

Given that the reflection off a dielectric is a surface effect, the reflected light isn’t actually affected (colored) by the dielectric at all. Also, the transmission color does not take the transmission depth into account and as such does not realistically model absorption. Thus, if you can, you should use the “glass2” material.

In some materials the index of refraction varies significantly with the wavelength (color) of the light. This causes dispersion, also known as nice rainbows of color. Since dispersion makes a scene converge a lot slower, you have to enable dispersion explicitly for each glass material. In addition you need to tell LuxRender how the index of refraction varies with wavelength. There are several different ways to do this. The easiest option is to use one of the presets, if your exporter has one. Otherwise you can enter the Cauchy or Sellmeier coefficients, which can often be found in glass catalogs. Another alternative is to use data files with the **tabulateddata** texture. However those are more advanced features I won’t go into here.
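
Cauchy’s equation itself is simple. This Python sketch evaluates it; the coefficients are approximate values often quoted for BK7-like crown glass and are used here purely as an illustration:

```python
def cauchy_ior(wavelength_um, a, b):
    """Cauchy's equation: n(lambda) = A + B / lambda^2, lambda in micrometers."""
    return a + b / wavelength_um**2

# Approximate coefficients in the ballpark of common crown glass (assumption)
A, B = 1.5046, 0.00420
n_blue = cauchy_ior(0.45, A, B)
n_red = cauchy_ior(0.65, A, B)
print(n_blue, n_red)  # blue light sees a higher IOR than red -> dispersion
```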

Usually dielectrics behave very similarly, so if you can’t find data for a specific dielectric you should be able to get away with choosing a preset with a similar index of refraction.

The glossy material is a layered model. It assumes a diffuse base layer which is coated with a dielectric layer. This model fits materials like varnished wood and colored plastics well. Any light which is not reflected by the coating is transmitted to the base layer, where it is diffusely reflected or absorbed.

The diffuse base is treated like in the matte material, except the roughness does not apply to it. Thus it does not take any subsurface scattering effects into account either. The coating is treated as a dielectric obeying the Fresnel equations. You can define it via a specular color, which filters the reflected light, or the index of refraction. As mentioned above it’s more correct, and thus preferable, to use the index of refraction. The roughness parameter controls the surface roughness of the coating from almost mirror-like (zero) to perfectly diffuse (one). It is also based on a microfacet model, however the roughness parameter does not directly relate to the angles between microfacets.

The glossy material also has an absorption parameter, which determines the absorption of the coating layer before the light hits the diffuse base. The associated depth parameter indicates the depth, or height, of the coating layer above the diffuse base, and is measured in nanometers. The effect of the absorption can be subtle, but is most noticeable when the material is viewed at near grazing angles. Keep in mind that you specify an absorption color. So if you set it to bright red, your object will get a cyan tint as the red color is absorbed by the coating before it’s reflected.

The metal material models metals, or more specifically conducting materials. Like dielectrics, their reflective behavior is given by their index of refraction. Unlike dielectrics, however, no light is transmitted through the material. Almost all of the light is reflected and the rest is absorbed within a few nanometers of the surface. The only parameters used for the metal material are the index of refraction and the roughness. The surface roughness is treated the same way as with the glossy material.

Since metals reflect almost all the light, it’s important to have a good environment around them. Otherwise they’ll just appear black and lifeless. In the real world metals look like metals exactly because of the way they reflect the environment.

The color of a metal is due entirely to how the index of refraction changes with wavelength. The index of refraction can come from either a list of presets or from so-called *nk files* which can be found for instance on the luxpop site (which despite the name is not associated with LuxRender in any way). The nk files contain the index of refraction at different wavelengths, and it is assumed that the IOR changes smoothly between the given values.

Please keep in mind that the index of refraction of metals is very different from that of dielectrics. Technically, metals have a complex refractive index with a non-zero imaginary part, called the extinction coefficient. In contrast, the IOR of dielectrics has only a real component.
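
To illustrate the difference, the normal-incidence reflectance follows directly from the complex IOR \(n + ik\). A small Python sketch; the gold values are approximate literature numbers, not taken from an actual nk file:

```python
def conductor_reflectance(n, k):
    """Normal-incidence reflectance for a material with complex IOR n + ik."""
    return ((n - 1)**2 + k**2) / ((n + 1)**2 + k**2)

# Approximate values for gold near 590 nm (assumption): n ~ 0.37, k ~ 2.82.
# A dielectric like glass simply has k = 0.
print(conductor_reflectance(0.37, 2.82))  # ≈ 0.85, a strong mirror
print(conductor_reflectance(1.5, 0.0))    # ≈ 0.04, ordinary glass
```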

The extinction coefficient is so named because it determines how far into the material light will penetrate before being absorbed. In the case of metals, however, the typical penetration depth is on the order of nanometers, so LuxRender does not consider the metal material to be transparent; any light which isn’t reflected is assumed to be absorbed.

Most transparent materials can be described by the Beer-Lambert law, which gives the transmission of light through a substance as \( T = \frac{I}{I_0} = e^{-\alpha \ell} \). Here \(I\) is the intensity of the light after going through the material, \(I_0\) is the intensity before entering, \(\ell\) is the distance the light traveled through the material and \(\alpha\) is the absorption coefficient. LuxRender works with distances in meters so \(\alpha\) should have units of \(m^{-1}\), that is, how much is absorbed per meter.
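
As a quick illustration, the law is a one-liner in Python:

```python
import math

def transmission(alpha_per_m, distance_m):
    """Beer-Lambert law: fraction of light remaining after distance_m meters."""
    return math.exp(-alpha_per_m * distance_m)

# alpha = 1 m^-1 leaves about 37% of the light after one meter,
# and the fraction keeps dropping exponentially with distance
print(transmission(1.0, 1.0))  # ≈ 0.3679
print(transmission(1.0, 2.0))  # ≈ 0.1353
```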

Since absorption is a “volume thing”, as opposed to a “surface thing” like reflectance, objects are now associated with two new volume properties: exterior and interior. These describe the volumes on both sides of the surface of the object. The *exterior* volume is defined as being on the same side as the surface normal points, while *interior* is on the opposite side.

In order to use the new feature you have to use a new material, so far named “glass2”. Instead of the normal material parameters it relies on the mediums in the external and internal volumes. In order to simplify things mediums are named (similar to materials) and can thus be reused. This comes in handy when you have say a glass with a liquid inside it. More on that later. Here’s a screenshot from LuxBlend which shows the new material.

In the above definition, the exterior medium is set to the special “world” medium, defined in the “Cam/Env” page, which by default is set to air. The interior medium is set to the “clear” volume type, which is the only one supported at the time of writing. It defines a transparent medium with a constant index of refraction and absorption according to the Beer-Lambert law. The “Options” button next to the medium name allows you to manage the mediums, such as creating a new one or renaming an existing one.

The absorption definition warrants a closer look. Absorption is in its nature subtractive, meaning that if the medium absorbs a lot of red and green, the object will look blue. And, as mentioned above, the absorption coefficient needs to be specified in “absorption per meter”. In order to make it more intuitive you can use the “Color at depth” button. Enabling this button allows you to specify the color that white light should have after travelling a certain distance through the medium. That is, imagine a white light source behind a slab of your material of the thickness you specify. When you look at the opposite side of the block you should see the color you specify in LuxBlend. LuxBlend converts this information into a suitable absorption coefficient when exporting.
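
The conversion itself is just the Beer-Lambert law inverted per color channel. This is a hypothetical Python sketch mirroring what such an exporter would compute; it is not LuxBlend’s actual code:

```python
import math

def absorption_from_color_at_depth(color_rgb, depth_m):
    """Invert the Beer-Lambert law per channel: alpha = -ln(c) / d.

    Hypothetical helper mirroring the "Color at depth" conversion
    described above, not the exporter's actual implementation.
    """
    return [-math.log(c) / depth_m for c in color_rgb]

# A slightly dark orange seen after 0.5 m of the medium
alpha = absorption_from_color_at_depth([0.8, 0.4, 0.05], 0.5)
print(alpha)  # red is absorbed least, blue most -> the medium looks orange
```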

As you can see in the image above, I’ve set the “Color at depth” to a slightly dark orange at a depth of 0.5m. In the image below you can see how the above material looks when applied to the standard Suzanne model which is roughly 2m x 2.5m x 1.5m.

As mentioned, LuxRender uses the surface normal to determine whether it should use the external or the internal volume at an intersection. This means that you have to be careful to be consistent with your definitions. For example, to add an air bubble inside the glass monkey we make a sphere and place it inside the monkey model. Assuming the normals of the sphere point “out”, we then assign it a material with the internal medium set to “world” (or air), *and ensure the external medium is set to the same as the internal medium of the monkey*. This is needed because LuxRender doesn’t actually know about the actual volume between the surfaces. Let’s take a look at a quick example.

Of course we could have created a new interior medium, like a liquid with its own absorption, but I wanted to keep things simple. As you can see below, the air bubble is clearly visible, both in the lower absorption and in the refractions.

You can download the glass Suzanne scene here in case you want to play with it yourself.

Since LuxRender is physically based, measured data from the real world can easily be applied. Thanks to a new “tabulateddata” texture (note, it might change name), LuxRender can use tabulated data files as input for colors, much in the same way `.nk` files could be used with the metal material. It ignores any text until it finds a header line or a data line. It then reads until it fails to read a data line. The header line is optional and has the following format:

```
wavelength: unit, data: description
```

The `wavelength` field indicates which units the wavelengths are given in. Currently it accepts the following units: `nm`, `um`, `eV`, `cm-1`. The `data` field is currently ignored, so you can write anything for the description. If no header is found it assumes that the wavelength unit is nm. A data line consists of two floating point numbers per line (any additional numbers or data are ignored). Here is an example file.

```
wavelength: nm data: absorption (1/cm)
380 0.0001137
500 0.000204
600 0.002224
720 0.01231
```

Data is linearly interpolated between the samples. The bottom line is that the tabulateddata texture can read lots of data files with little or no modification. It’s worth noting that the absorption values in the above file have units of \( \text{cm}^{-1} \), while LuxRender expects the absorption to be in units of \( \text{m}^{-1} \). Thus we need to scale up the absorption by a factor of 100. You can use the “s” field by the absorption color in LuxBlend to do this (see screenshot below).
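
To make the interpolation and unit scaling concrete, here is an illustrative Python sketch using the sample file above (not LuxRender’s parser):

```python
# Samples from the example file: (wavelength in nm, absorption in cm^-1)
samples = [(380, 0.0001137), (500, 0.000204), (600, 0.002224), (720, 0.01231)]

def absorption_per_m(wavelength_nm):
    """Linearly interpolate between samples and scale from cm^-1 to m^-1."""
    for (w0, a0), (w1, a1) in zip(samples, samples[1:]):
        if w0 <= wavelength_nm <= w1:
            t = (wavelength_nm - w0) / (w1 - w0)
            return 100.0 * (a0 + t * (a1 - a0))
    raise ValueError("wavelength outside tabulated range")

print(absorption_per_m(550))  # halfway between the 500 and 600 nm samples
```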

In order to show the power and beauty of using measured data, I made a small pool scene in Blender with realistic dimensions. The pool model is about 7.5m x 3.5m and 2.5m at the deepest. I then found a page named Optical Absorption of Water Compendium which has several data files containing measured absorption of seawater. I opted to use the Pope and Fry ’97 data set, which covers the entire visible spectrum.

Another quick search led me to the Index of Refraction of Water page, where I found measured values for the IOR of seawater. Since the page only listed a few data points, all well inside the visible spectrum, I decided to fit the data to Cauchy’s equation. The wavelength-dependent parameter in Cauchy’s equation is given as \( \frac{1}{\lambda^2} \), where the wavelength \( \lambda \) is in micrometers. Converting the wavelengths from the page into this format allowed me to perform a linear fit, yielding the two coefficients \( A = 1.32401 \) and \( B = 0.00307694 \). These can then be entered into a “cauchy” IOR texture in LuxBlend. Below is the material setup for the water in the pool scene.
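
The fitting procedure can be sketched in a few lines of Python. The four sample points below are my own illustrative values in the ballpark of published water IOR data, not the original data set:

```python
# Least-squares fit of n = A + B/lambda^2 (Cauchy's equation).
# Sample points (wavelength in micrometers, IOR) are illustrative only.
data = [(0.4047, 1.3428), (0.4861, 1.3372), (0.5893, 1.3330), (0.6563, 1.3311)]

xs = [1.0 / (lam * lam) for lam, _ in data]  # Cauchy variable 1/lambda^2
ys = [n for _, n in data]
n_pts = len(data)
mx = sum(xs) / n_pts
my = sum(ys) / n_pts
# ordinary linear regression: slope is B, intercept is A
B = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
    sum((x - mx) ** 2 for x in xs)
A = my - B * mx
print(A, B)
```

With these sample points the fit lands close to the coefficients quoted above, which is a nice sanity check on the procedure.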

As you can see I have enabled dispersion, which utilizes the “cauchy” IOR texture. What remains is to define the material for the pool walls, where I used a simple diffuse material. Note that the wall material does not need to have any mediums defined; LuxRender will automatically do the right thing here since the walls aren’t transparent. Below you can see how this turned out.

Jump in and enjoy

**Installers**

LuxRender 0.6.1 x64

LuxRender 0.6.1 x86 SSE2

LuxRender 0.6.1 x86 SSE1

**Archives**

LuxRender 0.6.1 x64

LuxRender 0.6.1 x86 SSE2

LuxRender 0.6.1 x86 SSE1

Happy rendering!

I covered the basics needed in my post Numerical approximations to differential equations. In this case we have a system of *coupled* differential equations, meaning that each differential equation depends on the other. However the basic approach we’ll use to solve them is very similar to the one I described in that post.

In order to make the code useful for other problems, I decided to try to make it a bit generic. I declared a general interface for ODE integrators. The integrator maintains a state comprising the independent variable, say time, and one or more dependent variables, like position and velocity. Of course, the interpretation of the variables is of no importance to the integrator.

After setting the initial state one repeatedly calls the `Update` method to advance the integrator. In order to perform the update the integrator will require the derivatives of the dependent variables. So, the `Update` method requires a function pointer (or delegate in C#) which it can use to calculate the derivatives for an arbitrary state. This makes it possible to use integrators which internally perform multiple steps.

```
public interface IIntegrator
{
    void SetInitialState(double x, double[] y, double h);
    void GetState(out double x, out double[] y);
    void Update(DerivativesDelegate derivatives);
}
```

To make my life easier, I made an abstract base class which implements `IIntegrator`. It has an internal state and implements the methods for setting and getting the state. Specific integrators can then just override the `Update` method.

The first integrator to be implemented is, of course, the forwards Euler method. Since the implementation is so simple, it’s nice to have during debugging in order to determine if there’s a problem in the integrator or the calculation of the derivatives. The implementation looks like this.

```
public class EulerIntegrator : IntegratorBase
{
    public EulerIntegrator() { }

    public override void Update(DerivativesDelegate derivatives)
    {
        double[] dydx;
        // get derivatives
        derivatives(x, y, out dydx);
        for (int i = 0; i < n; i++) {
            // apply forward Euler method
            y[i] = y[i] + h*dydx[i];
        }
        x += h;
    }
}
```

It first gets the derivatives \(\frac{dy}{dx}\) at the current position \(x\), then uses these to take a step forwards. The length of the step is \(h\), which is set by `SetInitialState`.

A small note regarding the derivatives delegate: since it’s not possible to pass a `const` array in C#, the derivatives method can modify the internal state of the integrator. To avoid this the integrator should pass a copy of the array or a read-only view of some sort. However, I felt this would only clutter the code for this project.

The fourth-order Runge-Kutta integrator is a bit more complicated. It takes four "trial steps" in order to perform a single update. Each of these steps are done using the regular forwards Euler method but with different step lengths and derivatives. The steps are as follows:

- Use the derivatives at \(x\) to perform a half-step to \(x+\frac{h}{2}\). Find the derivatives.
- Use derivatives at \(x+\frac{h}{2}\) to perform another half-step from \(x\). Find the derivatives again.
- Use updated derivatives at \(x+\frac{h}{2}\) to take a full step from \(x\) to \(x+h\) and find the derivatives.
- Compute a weighted average of all four derivatives, use this to step from \(x\) to \(x+h\).

Instead of using the finite difference method to approximate the differential equation, it is derived by using the fundamental theorem of calculus and approximating the integral using Simpson's rule. In addition the midpoint is found by averaging two approximations.

Enough boring background, here's my implementation.

```
public class RK4Integrator : IntegratorBase
{
    public RK4Integrator() { }

    public override void Update(DerivativesDelegate derivatives)
    {
        double[] k1, k2, k3, k4;
        double[] yt = new double[n];
        double h2 = h / 2;
        double h6 = h / 6;
        double x2 = x + h2;
        double xh = x + h;
        // compute intermediary steps
        // get derivatives at x
        derivatives(x, y, out k1);
        // get derivatives at x+h/2
        for (int i = 0; i < n; i++) {
            yt[i] = y[i] + h2*k1[i];
        }
        derivatives(x2, yt, out k2);
        // find new derivatives at x+h/2
        for (int i = 0; i < n; i++) {
            yt[i] = y[i] + h2*k2[i];
        }
        derivatives(x2, yt, out k3);
        // find derivatives at x+h
        for (int i = 0; i < n; i++) {
            yt[i] = y[i] + h*k3[i];
        }
        derivatives(xh, yt, out k4);
        // get final state by using weighted
        // average of derivatives
        for (int i = 0; i < n; i++) {
            y[i] = y[i] + h6*(k1[i] + 2*k2[i] + 2*k3[i] + k4[i]);
        }
        x = xh;
    }
}
```

Since the implementation is more complicated, it's a good idea to test that it all works. I used both integrators to solve a simple ballistic problem, which I could solve using pen and paper. This allowed me to verify their correctness and accuracy.

The ballistic problem I used as a test case consists of simulating a cannon ball fired upwards and finding how long it takes for the cannon ball to hit the ground. That means we ideally would like to stop the integration at exactly the point where the ball hits the ground. One way of doing that would be to stop once the ball has passed beyond ground level and then estimate the exact time of crossing based on the current and last position. This works fine for the ballistic problem. However, as I mentioned in part 1, in the white dwarf simulation \(\gamma\) isn’t defined when \(\bar{\rho}\) is negative. Ideally we want to stop before that happens. And the problem is even worse when using adaptive methods, as the time step can be so large that you overstep by a huge amount.

In order to resolve this problem, I came up with the following scheme. It was inspired by how MATLAB handles this issue. After each update, the integrator checks if the solution has overstepped. If it has, it repeats the last update but with a smaller time step. It repeats the process until either the stopping criteria is fulfilled, the time step becomes very small or it reaches the maximum number of iterations allowed.
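
The scheme can be sketched like this in Python (a simplified illustration, not the C# implementation described below):

```python
def integrate_until(step, criterion, x, y, h, min_h=1e-12, max_iter=100):
    """Sketch of the step-reduction scheme described above.

    step(x, y, h) advances the state one step; criterion(x, y) is
    positive before the stopping point and negative after it.
    """
    for _ in range(max_iter):
        x_new, y_new = step(x, y, h)
        if criterion(x_new, y_new) < 0.0:
            h *= 0.5              # overstepped: retry with a smaller step
            if h < min_h:
                return x_new, y_new
            continue
        x, y = x_new, y_new       # accept the step
        if abs(criterion(x, y)) < 1e-9:
            break                 # stopping criteria fulfilled
    return x, y

# toy problem: y decreases at unit rate, stop when it hits zero (at x = 1)
x_end, y_end = integrate_until(lambda x, y, h: (x + h, y - h),
                               lambda x, y: y, 0.0, 1.0, 0.3)
print(x_end, y_end)  # x_end close to 1, y_end close to 0
```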

In order for the integrator to determine if the stopping criteria has been reached or exceeded, I introduced the following delegate.

```
public enum StoppingCriteriaDirection { Both, Positive, Negative };

public delegate double StoppingCriteriaDelegate(double x, double[] y,
    out StoppingCriteriaDirection direction);
```

In addition I introduced an overloaded `Update` method.

```
public interface IIntegrator
{
    void SetInitialState(double x, double[] y, double h);
    void GetState(out double x, out double[] y);
    void Update(DerivativesDelegate derivatives);
    void Update(DerivativesDelegate derivatives,
                StoppingCriteriaDelegate stoppingCriteria,
                out bool stopped);
}
```

The integrator then tries to locate the roots of the `StoppingCriteriaDelegate`. The `StoppingCriteriaDirection` is handy to prevent premature termination of the integration. Consider for instance if we in the ballistic problem had shot the cannon ball from ground level, but wanted to know how long it took to fall down on the roof of a house. Our implementation of the `StoppingCriteriaDelegate` would then return `y - y_roof`. In this case the stopping criteria would be met twice: first on the way up, in which case we want to keep going, and then on the way down. So we specify that we’re only interested in the case where the zero is approached from above, that is when the derivative is negative.

The neat thing about this implementation is that it reuses the existing framework in such a way that the new `Update` method only has to be implemented in the `IntegratorBase` class and can thus be used transparently on all descendants.

Seems I have to postpone the juicy bits for the next part so I can get this published before the apocalypse. In addition to the ballistic problem I’ve added a second test problem for which there is a simple analytical solution. Unlike for the ballistic test, the Runge-Kutta methods cannot reproduce this solution exactly.

In order to show how the framework can be used I’ll show the main loop of the ballistic simulation.

```
while (!stopped)
{
    integrator.Update(
        // derivatives delegate
        delegate(double _x, double[] _y, out double[] dydx)
        {
            dydx = new double[2];
            dydx[0] = _y[1]; // dx/dt = v
            dydx[1] = -9.81; // dv/dt = a in ms^-2
        },
        // stopping criteria delegate
        delegate(double _x, double[] _y, out StoppingCriteriaDirection direction)
        {
            direction = StoppingCriteriaDirection.Negative;
            return _y[0]; // stop when hitting the ground
        },
        // stopped flag, true if stopping criteria is met
        out stopped);
    integrator.GetState(out x, out y);
}
```

The independent variable is `x` (time) while the dependent variables are `y[0]` (position) and `y[1]` (velocity). In this case I’ve opted to use anonymous methods for the two delegates. As you can see the stopping criteria delegate simply returns the position, with the ground being at \(x = 0\). Since we’re only interested in when the cannon ball hits the ground on the way down, the direction is set to `Negative`. Just to show how bad the Euler integrator is, here’s the test run from the ballistic simulation with an initial upwards velocity of \(13 m/s\) and a step length of \(0.1 s\):

```
Testing integrator 'ODE.Integrators.EulerIntegrator'
Time: 2.749448 Position: 0.000000 Velocity: -13.972081
Testing integrator 'ODE.Integrators.RK4Integrator'
Time: 2.650357 Position: 0.000000 Velocity: -13.000000
```

Since we’re not considering air resistance the cannon ball should have the same velocity when it hits the ground, except in a downwards direction of course. The time taken should be \(\sim2.650356779s\). It’s clear that the Euler integrator is having a difficult time with the “large” step time, however the RK4 integrator is spot on. This is not surprising given that it is a fourth order method and the ballistic problem is quadratic in nature.
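
As a quick cross-check outside the C# project, the same RK4 scheme can be reproduced in a few lines of Python, using simple linear interpolation to estimate the crossing time:

```python
# Ballistic test: initial velocity 13 m/s, step 0.1 s, analytic answer t = 2v/g
def rk4_step(f, x, y, h):
    """One classical fourth-order Runge-Kutta step for y' = f(x, y)."""
    k1 = f(x, y)
    k2 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k1)])
    k3 = f(x + h/2, [yi + h/2*ki for yi, ki in zip(y, k2)])
    k4 = f(x + h,   [yi + h*ki  for yi, ki in zip(y, k3)])
    return [yi + h/6*(a + 2*b + 2*c + d)
            for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]

def derivs(t, y):
    return [y[1], -9.81]  # dx/dt = v, dv/dt = -g

t, y, h = 0.0, [0.0, 13.0], 0.1
while True:
    t_new, y_new = t + h, rk4_step(derivs, t, y, h)
    if y_new[0] < 0.0:  # crossed ground level: interpolate the crossing time
        t = t + h * y[0] / (y[0] - y_new[0])
        break
    t, y = t_new, y_new

print(t)  # close to the analytic 2 * 13 / 9.81 ≈ 2.650357 s
```

The grid values themselves are exact here (RK4 integrates this quadratic problem exactly); the tiny remaining error comes from the linear interpolation of the crossing.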

If you want to play around you can download the entire source code here: WhiteDwarfSim.zip. I’ve developed it using MonoDevelop, but tested it on Visual Studio as well. So far the name is a bit misleading but I will justify it in part 3…

Since we’ve had to move hosts recently, I’ve added the files here as a mirror in case something goes wrong:

**MS-Windows®**

Windows 32bit SSE2: luxrender_v06_RC6_win32_SSE2.zip (4.14 Mb) **recommended**

Windows 32bit SSE1: luxrender_v06_RC6_win32_SSE1.zip (4.15 Mb) *compatibility for older CPUs*

Windows 64bit: luxrender_v06_RC6_win64.zip (5.15 Mb)

**Mac OS X®**

OS X 10.4+ 32bit: LuxRender-06RC6-Install-OSXintel32.dmg.zip (12.9 Mb)

OS X 10.4+ 64bit: LuxRender-06RC6-Install-OSXintel64.dmg.zip (13.3 Mb)

**Linux**

32 bit: lux_v06rc6_ia32.tar.bz2 (11 Mb)

64 bit: lux_v06rc6_x64.tar.bz2 (9.1 Mb)

I won’t perform the derivations of the equations used, as that is not the main focus of this post. However a short introduction to white dwarf stars might be in order. A star such as our sun survives by burning the hydrogen in its core. Immense gravitational forces, caused by the matter of the star itself, squeeze the core so much that hydrogen fuses into helium. This nuclear fusion releases heat which provides an outwards pressure. This pressure counteracts the gravitational force, leading to a fine balance.

When the fuel runs out, there is nothing to stop gravity. The star gets crushed under its own weight, so to speak. The star will continue to shrink until the atoms making up the star are squashed together and quantum mechanical effects become dominant. What happens next is weird. The electrons try to crowd together as best as they can, however like people in a bus, only one electron can occupy a specific quantum state (bus seat) at a time. This is called the Pauli exclusion principle. When all the states are occupied, the other electrons have nowhere to go (they don’t like to stand). This creates a kind of force, preventing the star from collapsing further.

Unless the star is much more massive than our sun, there is not enough mass to overcome this force, and the star will remain a small but massive ember.

For this project, we want to find the radius of the white dwarf by considering the forces inside the white dwarf. As mentioned, I’m basing this on an earlier assignment which can be found here. It contains a brief derivation of the equations. Assuming the star is in an equilibrium, the pressure must counterbalance the gravitational force. The gravitational force depends on the mass, which again depends on the density of the star. We end up with the following set of coupled first-order ordinary differential equations \[\frac{d\rho}{dr} = -\left(\frac{dP}{d\rho}\right)^{-1}\frac{Gm}{r^2}\rho\] \[\frac{dm}{dr} = 4\pi r^2 \rho.\] Here \(\rho = \rho(r)\) is the mass density of a small volume at a distance \(r\) from the center of the star, \(m = m(r)\) is the integrated (total) mass within a radius \(r\), \(P\) is the pressure and \(G\) is the gravitational constant. The radius \(R\) of the star is found when \(\rho(R) = 0\), giving the star a mass \(M\) of \(M = m(R)\). These equations relate the change in density and the change in mass.

Since several of the quantities involved are either very large or very small, the equations should be written in dimensionless form before they can be implemented. Otherwise one could experience precision problems due to the limits of the floating-point representation used by computers. In dimensionless form the equations read \[\label{eq:drhodr}\frac{d\bar{\rho}}{d\bar{r}} = -\frac{\bar{m}}{\gamma}\frac{\bar{\rho}}{\bar{r}^2}\] \[\label{eq:dmdr} \frac{d\bar{m}}{d\bar{r}} = \bar{r}^2 \bar{\rho},\] where \(r = R_0\bar{r}\), \(m = M_0\bar{m}\), \(\rho = \rho_0\bar{\rho}\). \(\gamma\) is given as \[\gamma(x) = \frac{x^2}{3\sqrt{1+x^2}},\] where \(x = (\rho / \rho_0)^{1/3} = \bar{\rho}^{1/3}\). The constants \(R_0\), \(M_0\) and \(\rho_0\) depend on \(Y_e\), the number of electrons per nucleon. This is an input parameter, indicating which element the white dwarf consists of. For iron (\(^{56}\text{Fe}\)) it is \(26/56\). The constants are given as follows \[R_0 = 7.72 \times 10^6 Y_e \text{ m}\] \[M_0 = 5.67 \times 10^{30} Y_e \text{ kg}\] \[\rho_0 = 9.79 \times 10^8 Y_e^{-1} \text{ kg m}^{-3}.\]
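
The dimensionless \(\gamma\) and the scale constants translate directly into code. A small Python sketch (the simulation itself is in C#; this is just for illustration), using the iron value \(Y_e = 26/56\):

```python
# Scale constants for an iron white dwarf (Y_e = 26/56), as defined above
Ye = 26.0 / 56.0
R0 = 7.72e6 * Ye        # m
M0 = 5.67e30 * Ye       # kg
rho0 = 9.79e8 / Ye      # kg m^-3

def gamma(rho_bar):
    """gamma(x) = x^2 / (3 sqrt(1 + x^2)) with x = rho_bar^(1/3)."""
    x = rho_bar ** (1.0 / 3.0)
    return x * x / (3.0 * (1.0 + x * x) ** 0.5)

print(gamma(1.0))  # x = 1 gives 1 / (3 sqrt(2)) ≈ 0.2357
```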

When integrating the above equations, we start at the center of the star and integrate outwards until we reach the stopping condition \(\rho(r) = 0\). Due to the nature of floating point numbers we should instead stop when \(\rho(r) = \epsilon\), where \(\epsilon\) is a small positive number. I used \(\epsilon = 10^{-9}\) in my simulations. When \(r = 0\), i.e. at the center of the star, \(\bar{m}(r) = 0\) and the density \(\bar{\rho}(0)\) is given by \(\bar{\rho}_c\), the core density. This is an input parameter which we’ll vary.

Unfortunately we can’t start integrating at the core just like that. The reason is that \(\ref{eq:drhodr}\) is not defined at \(r = 0\). This means that we need to start the integration at some small initial radius \(h\), and in order to do so we need to approximate \(\bar{\rho}(h)\) and \(\bar{m}(h)\). Using a backward Euler scheme, we can find an approximation of the initial values. From \(\ref{eq:drhodr}\) and \(\ref{eq:dmdr}\) we get the following scheme \[\label{eq:be_rho}\frac{\bar{\rho}(h)-\bar{\rho}(0)}{h} = -\frac{\bar{m}(h)}{\gamma(x(h))}\frac{\bar{\rho}(h)}{h^2}\] \[\label{eq:be_m}\frac{\bar{m}(h) - \bar{m}(0)}{h} = h^2\bar{\rho}(h).\] If we rewrite these equations we get \[\label{eq:be_rho2}\left(1 + \frac{\bar{m}(h)}{h\gamma(x(h))}\right)\bar{\rho}(h) - \bar{\rho}(0) = 0\] \[\label{eq:be_m2}\bar{m}(h) - h^3\bar{\rho}(h) - \bar{m}(0) = 0.\] We see that \(\ref{eq:be_m2}\) implies that \[\label{eq:be_mh}\bar{m}(h) = h^3\bar{\rho}(h) + \bar{m}(0),\] so we can insert this into \(\ref{eq:be_rho2}\), and using \(\bar{\rho}(0) = \bar{\rho}_c\) and \(\bar{m}(0) = 0\) we get \[\label{eq:be_rho3}\left(1 + h^2\frac{\bar{\rho}(h)}{\gamma(x(h))}\right)\bar{\rho}(h) - \bar{\rho}_c = 0.\] From this we can see that if we select a very small initial radius \(h\), then \(\bar{\rho}_c\) is actually a very good approximation to \(\bar{\rho}(h)\). Thus we set \(\bar{\rho}(h) \approx \bar{\rho}_c\). We then insert this approximation into \(\ref{eq:be_mh}\) to find the missing \(\bar{m}(h)\), and we can start the integration!
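The resulting initial values can be computed in a couple of lines. A sketch (the helper name is mine):

```python
def initial_values(rho_c, h):
    # for a small initial radius h, rho(h) is well approximated
    # by the core density rho_c
    rho_h = rho_c
    # m(h) then follows from m(h) = h^3 * rho(h) + m(0), with m(0) = 0
    m_h = h ** 3 * rho_h
    return rho_h, m_h
```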

There remains only one small issue: \(\gamma(x)\) is not defined for negative values of \(\bar{\rho}\) (we’re not dealing with imaginary stars here). Even though we stop the integration when \(\bar{\rho}\) is close to zero, we run the risk of overstepping during an integration step. In my original implementation I used \(\max(\bar{\rho}, 10^{-12})\) instead of \(\bar{\rho}\) when calculating \(\gamma\).
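The clamping trick can be folded directly into the \(\gamma\) calculation, for example like this (a sketch; the function name is mine):

```python
from math import sqrt

def gamma_safe(rho):
    # clamp rho from below so that x = rho^(1/3) stays real and
    # gamma stays positive even if the integrator oversteps
    rho = max(rho, 1e-12)
    x = rho ** (1.0 / 3.0)
    return x * x / (3.0 * sqrt(1.0 + x * x))
```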

At this point, all that remains is to implement an integrator, and off you go. In the assignment we had to implement and use the common fourth-order Runge-Kutta scheme, RK4. As mentioned, I wanted to try to implement an adaptive RK scheme. In the next post I’ll hopefully provide a working implementation. As an in-between snack, I leave you with this beautiful image of the white dwarf in NGC 2440 (the small dot in the center), surrounded by its ghostly remains.
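For reference, a single step of the classic RK4 scheme might look like this (a minimal scalar sketch, not the assignment’s actual implementation):

```python
def rk4_step(f, t, y, h):
    # one step of the classic fourth-order Runge-Kutta scheme,
    # where f(t, y) returns dy/dt
    k1 = f(t, y)
    k2 = f(t + h / 2.0, y + h / 2.0 * k1)
    k3 = f(t + h / 2.0, y + h / 2.0 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6.0 * (k1 + 2.0 * k2 + 2.0 * k3 + k4)
```

For the white dwarf equations one would step both \(\bar{\rho}\) and \(\bar{m}\) together, but the idea is the same.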

]]>As I mentioned in the other post, it is sometimes not possible to find an expression for the solution to a PDE. So, how can we be sure that the solution even exists? And does it even make sense to talk about the solution if we can’t write it down? Without derailing this post too much, I’ll just say that proving the existence of a solution is an important topic when studying PDEs. And it makes about as much sense to talk about the solution as it does to talk about \(\pi\). While the concept of \(\pi\), the ratio between the circumference of a circle and its diameter, is well defined and exact, we cannot write \(\pi\) down. The best we can do is to write down an approximation. The same is true for many PDEs.

There are several different ways to approximate the solution to a PDE, just as there are several different ways to approximate the value of \(\pi\). The first one I will describe is called the Finite Difference Method (FDM) and is rather intuitive. It is usually very easy to implement on a computer, which is why it is frequently used. Before we tackle a full PDE, let’s start by looking at the population growth model again, given by \[\dt{N(t)} = rN(t).\]As mentioned, the left hand side is the derivative of \(N(t)\), in other words how much \(N(t)\) will change if \(t\) changes (the slope of \(N(t)\) at time \(t\)). Intuitively we could approximate the derivative by letting \(t\) change slightly and measuring how \(N(t)\) changes. We could for instance measure the number of individuals at one time, then again after three months. If we subtract the two numbers we get the change in population, and by dividing this by the change in time between the two measurements we get an approximation of the derivative. Thus, if we count \(N_1\) individuals at some time \(t_1\) and \(N_2\) at some later time \(t_2\), the approximate derivative is given by \[ \dt{N(t)} \approx \frac{N_2 - N_1}{t_2 - t_1}.\]In the general case this approximation is only good in the neighborhood of the interval given by \(t_1\) and \(t_2\). It’s a fairly good approximation at \(t_1\) and \(t_2\) and a very good approximation at the midpoint between \(t_1\) and \(t_2\) (i.e. after 45 days in the example). This method is called “finite differences”. Let’s see what the example looks like.
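We can check how good this approximation is for the population model, where we know the exact derivative. A small sketch, assuming a growth rate of \(r = 0.2\):

```python
from math import exp

N0, r = 10000.0, 0.2
t1, t2 = 0.0, 0.25                      # two measurements, three months apart
N1, N2 = N0 * exp(r * t1), N0 * exp(r * t2)

approx = (N2 - N1) / (t2 - t1)          # finite difference approximation
t_mid = 0.5 * (t1 + t2)                 # midpoint, i.e. after 45 days
exact = r * N0 * exp(r * t_mid)         # true derivative at the midpoint
```

With these numbers the relative error at the midpoint is on the order of \(10^{-4}\).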

In the graph above the derivatives are given by the slopes of the tangent lines. Since the population model has a fairly smooth curvature, the derivative is fairly accurately approximated. If we instead measured the population after only a month, we would get a much better approximation.

Considering its name, it’s perhaps not very surprising that this method of approximating the derivative is the main idea behind the finite difference method. We replace the derivatives in the differential equation by finite difference approximations. Let’s apply it to the population model. We’ll approximate the left hand side as follows \[\dt{N(t)} \approx \frac{N(t+\Delta t) - N(t)}{\Delta t},\]where \(\Delta t\) is some small positive number. Note that since \(t_2 = t + \Delta t\) we get \(t_2 - t_1 = t + \Delta t - t = \Delta t\). This is called a forward finite difference approximation, because it uses information ahead of \(t\). So our approximation of the model looks like \[\frac{N(t+\Delta t) - N(t)}{\Delta t} = rN(t).\]By rearranging the terms we get \[N(t+\Delta t) = N(t) + \Delta t rN(t).\]Let’s look at this expression more closely. The left hand side is the population count at time \(t + \Delta t\), i.e. in the future relative to \(t\). On the right hand side we have the population count at time \(t\) plus the number of offspring per unit time at time \(t\) multiplied by the time step \(\Delta t\). This means that if we know \(N(t)\) we can easily find the approximated population count at the time \(t + \Delta t\). By applying this scheme repeatedly, taking small steps forward in time, we can form an approximation of \(N(t)\). Since we can calculate the value of \(N(t + \Delta t)\) directly in terms of known values, it is called an explicit scheme. Let’s see how our approximation of the population model works out, using the following Python script.

```
from math import exp

# N is our population count, initialized with N0
N0 = 10000
N = N0
# r is the reproduction rate per individual per year
r = 0.2
# dt is the time step for the simulation, in years
dt = 3.0 / 12.0
# current time in years
t = 0
# how many years to simulate
tend = 10
# iteration count
i = 0
while True:
    # output current population count,
    # along with the analytical solution
    print(t, N, N0 * exp(r * t))
    # calculate the next value of N
    N_next = N + dt * r * N
    i = i + 1
    t = i * dt
    if t > tend:
        break
    # update current value of N
    N = N_next
```

For this simulation \(\Delta t\) was set to three months and \(r = 0.2\). As one can see from the following graph, the approximation is very good for a while, but becomes progressively worse.

In this case we had a nice expression for the solution, so finding an approximate solution was a bit pointless. However, using the FDM we can find an approximation just as easily even when we *don’t* have a nice expression for the solution. This is why numerical approximations are so powerful, as most interesting problems fall into this latter category.

When dealing with a PDE, we perform the same steps. First we replace the derivatives by finite difference approximations. Then we rearrange the terms so that we get something which we can easily calculate on the computer. Finally we use this to find the approximated solution by repeatedly taking small steps forwards. I’ll write more about that in a later post.
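The steps above can be sketched for the 1D heat equation \(\pdt{T(x,t)} = k\pddx{T(x,t)}\) from the other post. This is just a sketch, assuming a uniform grid and fixed boundary values:

```python
def heat_step(T, k, dx, dt):
    # one explicit time step of dT/dt = k * d2T/dx2 on a uniform grid,
    # holding the boundary values fixed (Dirichlet conditions)
    T_new = T[:]
    for i in range(1, len(T) - 1):
        # central finite difference approximation of d2T/dx2
        T_new[i] = T[i] + dt * k * (T[i + 1] - 2.0 * T[i] + T[i - 1]) / dx ** 2
    return T_new
```

Calling `heat_step` repeatedly advances the temperature distribution forward in time, just like the population script did for \(N(t)\).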

]]>When you solve an ordinary equation like \( 2x^2 - 3x = 5 \) you want to find the specific values of \(x\) that satisfy the equation. With an ordinary differential equation, the solution is not a specific value of \(x\) but rather a function, say \(u(x)\), which satisfies the equation for all values of \(x\) (or for a specific range). The equation \(u(x)\) has to satisfy relates the derivative(s) of \(u(x)\) to some other function (and possibly \(u(x)\) itself), hence the name “differential equation”.

An example may help to illustrate the idea. Let’s look at a very simple population growth model. Let’s say that in a given year, the ratio of people who reproduce during that year is \(r\). The whole population has \(N(t)\) individuals in a given year \(t\), so the increase in population for year \(t\) is \(rN(t)\). Now let’s assume that the process is continuous (after all, people make babies all the time). Then the rate of change in the population at any given time \(t\) is \(rN(t)\). The rate of change in the population is the derivative of \(N(t)\). Using Leibniz’s notation we get the following differential equation: \[\frac{dN(t)}{dt} = rN(t).\] This is just another way of expressing the above. If we solve this differential equation, we find that \(N(t) = N_0 e^{rt}\). Here \(e\) is Euler’s number, and \(N_0\) is the population count at \(t = 0\). So if we say that \(r = 0.1\) and we start with \(N_0 = 10000\) individuals, we find that after 10 years the population count is \(N(10) = 10000 \cdot e^{0.1 \cdot 10} \approx 27183\).
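The number at the end is easy to verify directly:

```python
from math import exp

r, N0 = 0.1, 10000
N_10 = N0 * exp(r * 10)   # population after ten years, roughly 27183
```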

When we solved the above equation, \(N_0\) “magically” appeared in the solution. So where did it come from? Well, if I tell you that my collection of fluffy animals grows steadily with one animal per month and ask if you can figure out when I’ll reach 500 fluffy animals, there’s no way you can realistically answer that without also knowing something else (like how many fluffy animals I have right now). You need some extra information to “pin down the answer”. This information is usually provided as initial conditions (such as \(N_0\) above) or boundary conditions.

At first glance, a partial differential equation isn’t that much different. Instead of “regular” derivatives, it involves partial derivatives. In the above example, \(N(t)\) is a function of \(t\) alone. However, let’s say you leave a frozen steak on the kitchen table to defrost. Then the temperature inside the steak depends not only on the position \(x\) (where you measure it) but also on the time \(t\) (when you measure it). In math terms the temperature is given by \(T(x, t)\). Now you can measure the change in temperature in several different ways. You could for instance measure the change in temperature at different depths. This would be the partial derivative of the temperature with respect to position \[\pdx{T(x,t)},\]where the \(\partial\) symbol indicates that it’s a partial derivative we’re dealing with instead of a “regular” derivative.

By solving the heat equation one can find the temperature distribution throughout the steak at some arbitrary time after it was put on the table. The heat equation relates the partial derivative of the temperature with respect to time to the second partial derivative of the temperature with respect to position (how much the change in temperature is changing with position):\[\pdt{T(x,t)} = k\pddx{T(x,t)}.\] Again you’ll need some extra information to get some meaningful results. Typically you’ll need the initial temperature of the steak, the temperature in the room (initial conditions) and how the surface of the steak loses heat to the environment (boundary condition). If you’re a true mathematician you’ll just assume the steak is shaped like a perfect cylinder, otherwise you’ll also need to know the shape of the steak.

Depending on the shape of the steak it can be very difficult (or impossible) to find an expression for \(T(x, t)\), which is why the mathematician assumes a simple shape. One way of dealing with this is to try to find an *approximate* solution, typically using a computer. Though I think I’ll leave the details of that for another post.

I quickly installed GridMove, which is a completely awesome free program for managing all that real estate.

If you’re looking into getting a new monitor, this one should be considered!

]]>I started simple with a drop hitting the floor, and rendered it using LuxRender. My poor Q6600 spent roughly 80 CPU-hours on it. Here’s the result:

]]>