# Capstone lessons

The goal of these lessons is to show how the concepts covered in the main
lessons can be integrated. It is recommended to go through them
*after* having completed those lessons. Capstones are designed to be shorter,
and present some real-life applications of the principles covered in the main
lessons.

Capstone lessons present more advanced material, which may take a bit longer to grasp. We encourage you to try them one at a time, and to come back to them over the course of a few days.

# Approximate Bayesian Computation


**Reading time:** 13 minutes

**Status:** draft

**Key concepts:** arrays, control flow

**Extra packages:** StatsPlots, StatsBase, Statistics, Distributions

Approximate Bayesian computation, or ABC for short, is a very useful heuristic for estimating the posterior distribution of model parameters, specifically when the analytical expression of the likelihood function is unavailable (or when we can't be bothered to figure it out). The theory behind ABC will not be covered here in detail, so reading the previous article is highly recommended.
We will rely on a few packages for this example.
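As a taste of what the lesson covers, the rejection-sampling core of ABC can be sketched in a few lines of Julia. The Normal model, the prior, and the tolerance below are illustrative assumptions, not the example used in the lesson:

```julia
using Distributions, Statistics, Random

Random.seed!(42)

# "Observed" data, secretly generated with a true mean of 2.0
observed = rand(Normal(2.0, 1.0), 200)
obs_mean = mean(observed)

prior = Normal(0.0, 5.0)  # prior on the unknown mean μ
ε = 0.05                  # tolerance on the summary statistic

posterior = Float64[]
while length(posterior) < 1_000
    μ = rand(prior)                         # 1. draw a candidate from the prior
    simulated = rand(Normal(μ, 1.0), 200)   # 2. simulate data under the candidate
    if abs(mean(simulated) - obs_mean) < ε  # 3. accept if the summaries are close
        push!(posterior, μ)
    end
end

mean(posterior)  # the accepted candidates concentrate around the true mean
```

The accepted values of μ form an approximate sample from the posterior; shrinking ε trades acceptance rate for accuracy.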

# Genetic algorithm


**Reading time:** 11 minutes

**Status:** draft

**Key concepts:** data frames, generic code, type system

**Extra packages:** StatsPlots, StatsKit, Statistics

A genetic algorithm is a heuristic that takes heavy inspiration from evolutionary biology to explore a space of parameters rapidly and converge to an optimum. Every solution is a "genome", and these genomes can undergo mutation and recombination. By simulating a process of reproduction over sufficiently many generations, this heuristic usually gives very good results. It is also simple to implement, and this is what we will do!
A genetic algorithm works by measuring the fitness of each solution.
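The loop just described (rank by fitness, select parents, recombine, mutate) can be sketched as follows; the real-valued genome, the target vector, and the mutation rate are illustrative assumptions, not the lesson's actual example:

```julia
using Random

Random.seed!(12)

target = [1.0, -2.0, 3.0]             # the optimum we hope to recover
fitness(g) = -sum(abs2, g .- target)  # higher is better

mutate(g; rate=0.1) = g .+ rate .* randn(length(g))

# Recombination: each gene is inherited from one of the two parents at random
crossover(a, b) = [rand(Bool) ? a[i] : b[i] for i in eachindex(a)]

function evolve(pop, generations)
    for _ in 1:generations
        sort!(pop; by=fitness, rev=true)  # rank solutions by fitness
        parents = pop[1:10]               # the fittest become parents
        # Children are recombined, mutated copies of random parent pairs
        pop = [mutate(crossover(rand(parents), rand(parents))) for _ in 1:40]
        append!(pop, parents)             # elitism: parents survive unchanged
    end
    return pop
end

pop = evolve([randn(3) for _ in 1:50], 200)
best = argmax(fitness, pop)  # should land close to `target`
```

Keeping the parents unchanged (elitism) guarantees the best fitness never decreases from one generation to the next.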

# Runge-Kutta integration


**Reading time:** 11 minutes

**Status:** draft

**Key concepts:** writing functions, numerical precision, arrays, keyword arguments

**Extra packages:** StatsPlots

Numerical integration, the search for solutions of differential equations, is a hallmark of scientific computing. In this lesson, we will see how we can apply multiple concepts to write our own routine for the second-order Runge-Kutta method. In practice, it is never recommended to write one's own routine for numerical integration, as there are specialized packages to handle this task; in Julia, this is DifferentialEquations.jl. This being said, writing a Runge-Kutta method is an interesting exercise.
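A minimal sketch of what such a routine might look like, using the midpoint variant of second-order Runge-Kutta; the test problem below is an illustrative choice, not necessarily the one used in the lesson:

```julia
# Midpoint (second-order Runge-Kutta) integrator for du/dt = f(t, u),
# with the step size h exposed as a keyword argument
function rk2(f, u0, tspan; h=0.01)
    t0, tmax = tspan
    ts = collect(t0:h:tmax)
    us = zeros(length(ts))
    us[1] = u0
    for i in 1:(length(ts) - 1)
        t, u = ts[i], us[i]
        k1 = f(t, u)                                    # slope at the start of the step
        us[i + 1] = u + h * f(t + h / 2, u + h / 2 * k1)  # step using the midpoint slope
    end
    return ts, us
end

# Test problem: exponential growth du/dt = u, exact solution u(t) = exp(t)
ts, us = rk2((t, u) -> u, 1.0, (0.0, 1.0))
us[end]  # ≈ exp(1), with an O(h²) global error
```

Because the method is second-order, halving `h` should cut the error roughly by a factor of four, which is easy to verify with this exact solution.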