Can something really exist outside of a value system? Is it possible to be ‘objective’ and completely remove ‘subjectivity’ from one’s views? In my previous post I described the method of Verstehen, a qualitative approach in the social sciences to understanding the reasoning behind actions, an approach Max Weber used in his work.

Following on from the theme of writing about Max Weber, in this post I discuss the concept of value-free sociology, the historical context that motivated Weber to adopt such a method, and whether or not the approach has any merit.

Why is value-free sociology important for Max Weber?


In this post I want to explore some ideas about what the point of prostitution really is. I come at this topic from a male perspective, not a feminist or particularly modern one. Is it really just about seeing women for physical pleasure? Is it that simple, or is there something more complex involved? I’m just trying to make sense of things, and I’ll set out some ideas that I think are connected or crucial.

Prostitutes are an interesting subject. A real taboo. It’s just one of those things you engage with more as a youth, naturally, because your…

The internet is flooded with information on how to do machine learning, how to deploy machine learning, and so on, but how do we think about it? What is the process involved? I don’t mean which libraries are best for deep learning, or how to think about seasonal decomposition of time series. I’m talking about, to use a very business term, setting ourselves up for success. In this post, I discuss the concept of machine learning, specifically, as an operation.

This is the first of a number of posts dedicated to the thought of Max Weber. Max Weber, alongside Karl Marx and Emile Durkheim, is known as one of the ‘founding fathers’ of sociology. He used a particular method, known as Verstehen, with the aim of understanding the ‘motives’ behind why people take certain actions. In this post, we look at this method and how it has been utilised.

We first discuss some of the foundational concepts, then bring to light how Weber formed Verstehen and how he defined it. We then look at some criticisms of the method.


In the following, we give an overview of some methods which can be used to assess the similarity or difference between distributions. The article will not provide rigorous mathematical proofs and constructions, but aims to discuss certain aspects so as to develop a greater intuition about them. We look at some well-known statistics for testing normality and comparing distributions, and we explore the Wasserstein distance as a metric for assessing the difference between distributions.

What is a Probability Distribution?

One of the main questions that the area of statistics attempts to tackle is the whole concept of data, and where data comes from. What we aim to do when we develop…
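The two comparison tools mentioned above can be sketched in a few lines. This is a minimal example, not code from the post, assuming SciPy is available; the samples and parameters are illustrative. It runs a two-sample Kolmogorov–Smirnov test and computes the 1-Wasserstein distance between two shifted normal samples:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
a = rng.normal(loc=0.0, scale=1.0, size=1000)  # sample from N(0, 1)
b = rng.normal(loc=0.5, scale=1.0, size=1000)  # shifted sample, N(0.5, 1)

# KS test: the statistic is the maximum gap between the two empirical CDFs
ks_stat, p_value = stats.ks_2samp(a, b)

# Wasserstein distance: the "work" needed to move one distribution onto the other;
# for two normals with equal variance it is simply the difference in means
w_dist = stats.wasserstein_distance(a, b)

print(f"KS statistic: {ks_stat:.3f}, p-value: {p_value:.4f}")
print(f"Wasserstein distance: {w_dist:.3f}")
```

With a mean shift of 0.5 the KS test rejects equality decisively, and the Wasserstein distance lands near 0.5, the true gap between the means.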

This article builds on the previous ones, which highlighted techniques for scaling Gaussian Processes. In this post we discuss Posterior Approximation Methods, which retain the existing prior but approximate the posterior. In contrast to prior approximation methods, posterior approximation methods allow the joint optimisation of parameters and hyper-parameters.

Posterior approximation methods are a subset of Sparse Approximation methods within Global Approximation methods.

The aim in posterior approximation methods is to approximate the posterior while maintaining the full prior. Even with an approximated prior, the posterior inference mechanism of GPs still requires the inversion of the kernel matrix, and thus still has O(n³) complexity; this is precisely what makes posterior approximation methods so useful.
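To see where that O(n³) cost appears, here is a minimal sketch of exact GP posterior inference, not the methods discussed in the post: the solve against the n × n kernel matrix (done below via a Cholesky factorisation) is the cubic step that posterior approximation methods target. Kernel choice, data, and all names are illustrative assumptions:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(0)
X = rng.uniform(-3, 3, size=50)            # n training inputs
y = np.sin(X) + 0.1 * rng.normal(size=50)  # noisy observations
Xs = np.linspace(-3, 3, 100)               # test inputs

noise = 0.1**2
K = rbf_kernel(X, X) + noise * np.eye(len(X))  # n x n training kernel
Ks = rbf_kernel(Xs, X)                         # test-train cross kernel

# The O(n^3) step: factorising/solving against the n x n kernel matrix
L = np.linalg.cholesky(K)
alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))

mean = Ks @ alpha  # posterior predictive mean at the test points
```

Every prediction reuses `alpha`, but computing it scales cubically with the number of training points n, which is the bottleneck all of these approximation families attack.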

Approximation Methods

This article builds on the previous article, which highlighted techniques for scaling Gaussian Processes. In this post we discuss Prior Approximation Methods.

Within Sparse Approximation methods, we focus on Prior Approximation Methods.

The issue with scaling GPs is their O(n³) complexity, which is due to the inversion of the kernel matrix. We can overcome this problem by approximating the eigenfunctions using m data points (these m points are known as ‘inducing points’ and will be discussed further on), a process known as the Nyström approximation.
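The Nyström idea can be sketched directly: approximate the full n × n kernel matrix through its cross-kernel with m inducing points, giving a rank-m surrogate that never requires the full inversion. This is a hedged illustration with assumed data and an assumed RBF kernel, not the post’s implementation:

```python
import numpy as np

def rbf_kernel(X1, X2, lengthscale=1.0):
    """Squared-exponential kernel matrix between two sets of 1-D inputs."""
    d2 = (X1[:, None] - X2[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=500)                    # n data points
Z = X[rng.choice(len(X), size=20, replace=False)]   # m inducing points

Knm = rbf_kernel(X, Z)                          # n x m cross kernel
Kmm = rbf_kernel(Z, Z) + 1e-8 * np.eye(len(Z))  # m x m kernel, jittered

# Nystrom approximation: K ~= Knm Kmm^{-1} Kmn (rank m instead of n)
K_approx = Knm @ np.linalg.solve(Kmm, Knm.T)

K_exact = rbf_kernel(X, X)
rel_err = np.linalg.norm(K_exact - K_approx) / np.linalg.norm(K_exact)
print(f"relative Frobenius error with m=20: {rel_err:.4f}")
```

Because a smooth RBF kernel has rapidly decaying eigenvalues, even m = 20 inducing points out of n = 500 recover the kernel matrix to a small relative error, which is what makes the downstream O(nm²) algebra viable.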

There are plenty of posts on the web about Gaussian Processes (GPs), and this will be one of them; however, we focus on some methods which aim to alleviate the practical issues surrounding GPs.

The post is constructed as follows:

  1. Overview of Gaussian Processes and how we get there.
  2. Scalable Gaussian Processes. We split this section into the following:
  • Global Approximation Methods
  • Local Approximation Methods
  3. Implementation and GPU adaptation


We will briefly highlight the main ideas behind GP’s so that we can build on them.

If we consider regular linear regression, the aim is to find the optimum…
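As a reminder of that starting point, the aim in ordinary linear regression is to find the weights that minimise the squared error, which has a closed-form least-squares solution. A minimal sketch with assumed synthetic data (the true weight 3.0 and bias 0.5 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(100, 1))
y = 3.0 * X[:, 0] + 0.5 + 0.05 * rng.normal(size=100)  # true w=3.0, b=0.5

Phi = np.hstack([X, np.ones((100, 1))])       # design matrix with bias column
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # least-squares weights

print(w)  # close to [3.0, 0.5]
```

GPs generalise this picture: instead of optimising one weight vector, we place a prior over functions and do inference in function space, which is where the kernel (and its cost) enters.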


Just trying to understand the world around me.
