Big ideas: Design vs Evolution

This was supposed to be a single post, but I don’t like talking about concepts I haven’t previously introduced on the blog, so I’m making this first post about design and evolution and following it up with a post on movies and series.

Design

A design is an article/product/item of manufacture that is the result of an architect’s vision. The key feature of design is the presence of the final vision from the initial stages of the process. Perhaps the biggest achievement of design thinking is the wheel.


Intuitively, the wheel is uniquely the product of design thinking, since half a wheel, or a wheel with angles, would be close to useless. Therefore, if we were calculating the marginal value of each design step, and the steps were sufficiently small, we would never come up with the wheel: at most steps, the direction towards the wheel does not look like an improvement. It is only when the wheel is completed (or close to it) that its value starts to emerge. Accordingly, alternative improvements will always be chosen over those that lead to the wheel.


When the wheel is only 20 or 30 percent completed and only a step or two ahead can be envisioned, it is unlikely that the design process would continue and lead to the final product.

Design thinking does, however, make the product highly fragile. Since in essence it relies on the synergy of the components and not on their value in isolation, damaging a component creates a non-linear loss in value. For instance, adding a couple of angles to a wheel quickly deteriorates its value, as it can no longer roll smoothly. This fragility does not have to pertain to the object itself; it can emerge from the environment. A wheel’s utility, for instance, is fragile to not being on flat ground.

This same logic applies to buildings: they are a product of design thinking. You know a skyscraper is designed because knocking down a small proportion of the building causes it to become useless (it collapses). It is designed to lean on certain components more than others. (I might come back to this in the form of urban planning in some future post.)

Overall, the weakness of design is its fragility. It has potential for unprecedented efficiency, but it is very reliant on its designer. If the designer has no sense of history, he may not incorporate proper systems into his design, which fragilizes the whole endeavor.

Evolution

Evolution is all about small steps: you try different things in different directions and choose the ones that best improve the outcome. There is a caveat of conscious versus unconscious evolution, but it’s not important for my purposes.

Through evolution it is possible that the outcome be enormously complex. Perhaps one of the more famous examples is echolocation in bats or dolphins, which is an enormously complex process that is more efficient than most of what modern technology has pieced together. Perhaps a more intuitive example of evolutionary outcomes is in scheduling your calendar. Gradually adding routine things to your calendar (gym, dance, learning, etc) is an evolutionary process because you evaluate each addition separately.

Things that arise through evolution are generally very robust to variation and damage. In the scheduling example, if one activity suddenly gets cancelled, the value of the others is usually not damaged: you still have the other activities, plus the free time you gained from the cancellation. Similarly, half a lung is still very useful, though it will obviously supply only a fraction of the full lung’s oxygen. In fact most (if not all) of the organs of the human body share this feature.

Technical note: When dealing with evolved entities, it’s usually less misleading to rely on the Law of Large Numbers.
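A quick sketch of why the Law of Large Numbers is the right lens here: an evolved aggregate is built from many small, roughly independent contributions, and the average of such contributions stabilizes as their number grows. This is a minimal simulation, not a claim about any particular evolved system.

```python
import random

random.seed(0)

def sample_mean(n):
    """Average of n small independent contributions, each uniform on [0, 1]."""
    return sum(random.random() for _ in range(n)) / n

# As n grows, the average settles near the true mean of 0.5.
for n in (10, 1_000, 100_000):
    print(n, sample_mean(n))
```

With 10 draws the average bounces around; with 100,000 it is pinned very close to 0.5, which is exactly the stability that makes evolved aggregates predictable in bulk even when each small step is noisy.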

The thing about evolution that makes it robust is that it does not forget the past. It is conscious of history and has systems to incorporate the past into its future behavior.

Putting it together

To me evolution vs design is an important question in almost all domains in life. Should one plan or play it by ear and deal with things as they come along?

The truth of the matter is that nature (which is almost fully a product of evolution) has a statistical validity that makes arguing against it a highly challenging endeavor. Whatever was not robust was ruled out by millennia of evolution. Therefore, for a proposal that is an alternative to nature to be credible, it must have a significance level that can be juxtaposed against those millennia of evolution. I.e., though taking a car (a designed object) to work may appear universally more efficient, the lack of walking may lead to heart problems, unbalanced psychology, etc. The thing about evolution is that it feels no pressure to inform its user what the benefit of a certain heuristic/habit is, so great care must be taken when trying to replace such heuristics. Evolution may give us instincts to delay gratification, but a design thinker may not accept such a delay because he/she does not see the benefit, and if you do not see it, you cannot design around it.

Evolution doesn’t have to be purely genetic or purely cultural. For instance, the heuristic of looking left and right when crossing the street is probably both a cultural heuristic and a genetic tendency; either way, it is a method of taking the past and using it in the future.

Similarly, the economics debates, which used to be dominated in large part by either evolution (pure capitalism) or design (communism), have now become dominated by a mixture. Ideas of pure design have mostly been shut down; they have been replaced with arguments about the provision of platforms, which are inherently more sophisticated. Property rights and the rule of law are seen as such a platform, and debates over the welfare state can be interpreted similarly. It does seem, however, that there is quite a lot of emphasis on this kind of design thinking without much mention of fragility, and it’s obvious to me that with things like nukes, the internet, and even central banking (I find this one particularly interesting), we are quite a bit more fragile than we have ever been.

Evidence vs proof: How to think about global warming and GMOs

This might be a bit of a derivative post, because I am just getting back to blogging.

This topic has been resonating in my head over the last couple of weeks as I hear people saying the words “there is proof”. The difference between “proof” and “evidence” doesn’t appear to be an obvious one, so let me try to clarify.

There are two kinds of proofs: mathematical proofs and legal proofs. Mathematical proofs are derivations from axioms that are internally coherent; there are many levels of sophistication in mathematics, and some higher-level proofs may not be deemed worthy of the title of “proof” by the purest of mathematicians. Legal proofs, on the other hand, have little to do with internal coherence. It is merely the nature of the law to categorize things into binary states (guilty or not guilty); a legal proof is not meant to indicate a perfectly coherent internal structure but to classify people so that decisions can be made.

The important thing to note here is that science and proofs are unrelated. Science is a negative endeavor: it never deals with showing that anything is true, it works only by calling out untruths. When you read a paper that says “we show that there is a relationship between…”, what it really means is “we cannot reject the hypothesis that there is a relationship between these things…”. You cannot show that something is true; you can only show that it is untrue. If you run an experiment within a closed system a million times, it is STILL not proven.

You might think that all this means that the correct question is then: “at what point is the evidence strong enough so that we can lean on it to make decisions?” While the volume of evidence does matter, what could matter to a much greater degree is what it implies.

Let’s take global warming as an example. There is some correlational evidence, but it is hardly conclusive given that we know spurious correlations arise naturally. There are also some simulations, but simulations can be made to show anything and always miss some real-world nuances (given chaos theory, a half-realistic simulation would take years to run on the best supercomputers). Though everyone treats human-driven global warming as “fact”, it is far from it. This, however, is irrelevant: the consequences of global warming, perhaps eventually making the planet uninhabitable for humans, are so gargantuan that the standard of evidence we need is as low as it could be. In other words, the consequences of overreacting are comparatively minimal next to the consequences of underreacting. Indeed, when there is some probability, however small, that the world will end, the burden of proof falls to the other side.
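The claim that spurious correlations arise naturally is easy to demonstrate: correlate enough pairs of completely unrelated noise series and some will look “convincing” purely by chance. A minimal sketch, with made-up random data:

```python
import random

random.seed(42)

def pearson(xs, ys):
    """Pearson correlation coefficient, computed by hand."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# 500 pairs of series that are pure noise, utterly unrelated by design.
# Keep the strongest correlation found across all pairs.
best = 0.0
for _ in range(500):
    xs = [random.gauss(0, 1) for _ in range(20)]
    ys = [random.gauss(0, 1) for _ in range(20)]
    best = max(best, abs(pearson(xs, ys)))

print(best)
```

The best-looking pair will typically show a correlation well above 0.5, despite there being, by construction, no relationship at all — which is why correlational evidence alone proves nothing.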

The opposite example is GMOs (genetically modified organisms). The question is simple: should we genetically modify food on a mass scale? It is still unclear what such modification will do to humans, and it could have strange long-term interactions (which will almost never be fully controlled for in an experimental setting). Even if the evidence is merely unclear, that is grossly insufficient, because the effects could take any form, and if a large population of people were involved, the consequences could explode. The burden of proof quite obviously falls on the people pushing GMOs. A 99 percent significance level is not a metric that warrants mass application: if ninety-nine percent of flights were safe, there would be over ten thousand people dying every day from plane accidents.
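The flight arithmetic is back-of-envelope; the flight count and average passenger load below are rough assumptions of mine, not sourced figures, but any plausible numbers make the same point.

```python
# Rough assumptions: on the order of 100,000 commercial flights per day
# worldwide, with an average of about 100 passengers per flight.
flights_per_day = 100_000
passengers_per_flight = 100
failure_rate = 0.01  # "99 percent of flights are safe"

deaths_per_day = flights_per_day * failure_rate * passengers_per_flight
print(deaths_per_day)  # 100000.0
```

Even cutting every assumption by a factor of ten leaves thousands of deaths a day — a 1 percent failure rate is catastrophic when the exposure is large enough.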

Technical note: When using Bayesian statistics, it is easier to apply this logic objectively, because we can just set extreme priors.
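A minimal sketch of what an extreme prior does, using a standard Beta-Binomial update. The prior parameters are illustrative assumptions, chosen only to show how a skeptical prior resists modest evidence:

```python
def posterior_mean_harmless(prior_harmless, prior_harmful, safe_trials, harmful_trials):
    """Posterior mean of P(harmless) under a Beta(a, b) prior.

    With a Beta(a, b) prior and observed counts, the posterior mean is
    (a + safe) / (a + b + total).
    """
    a = prior_harmless + safe_trials
    b = prior_harmful + harmful_trials
    return a / (a + b)

# A flat prior (Beta(1, 1)) is quickly convinced by 99 safe trials out of 100:
print(posterior_mean_harmless(1, 1, 99, 1))        # ~0.98

# An extreme skeptical prior (as if we had already seen 10,000 harmful
# outcomes) barely moves on the same evidence:
print(posterior_mean_harmless(1, 10_000, 99, 1))   # ~0.0099
```

The extreme prior encodes the asymmetry of consequences directly: it takes an enormous weight of evidence, not a single tidy study, to shift the conclusion.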

Do hospitals make you sicker?

Selection bias is often a very destructive force when trying to determine what works and what doesn’t; it’s often impossible to evaluate whether programmes are a success unless you’re under something extreme like a totalitarian regime.

Take hospitalization, for instance: if you were to ask people coming out of the hospital about their health status, it would probably be worse than that of randomly selected people from the population.

So let’s assign some simple terms here. First, the treatment dummy:

Di = 1 if individual i gets treated, Di = 0 if they don’t.

We then observe the actual outcome (health status) of each individual, Yi. Each individual also has two potential outcomes: Y1i, their health status if treated, and Y0i, their health status if not treated.

So to actually measure how much of an effect treatment has, we would ideally compare each person’s outcome with treatment to their outcome without it, Y1i − Y0i.

Yet the world presents a problem, since what we actually observe is only one of the two potential outcomes per person:

Yi = Y0i + Di(Y1i − Y0i)

So there is no one number that represents what actually happens. We can measure the average effect of treatment on the treated (ATT), the average effect of treatment on the untreated (ATU), or the average treatment effect (ATE).

In public institutions there is this top-down selection problem, and the more private an institution is, the more the bias comes from self-selection.

The ATT tells us the effect of going to the hospital on the people who went:

ATT = E[Y1i | Di = 1] − E[Y0i | Di = 1]

The first term is observed: the health status of individuals who went to the hospital, having gone. The second term is unobserved: the potential health status those same patients would have had, had they not gone.

ATT can show us how much people gain from going to the hospital. In the real world private enterprises are much more likely to survive if people notice that there is a positive effect and so the market eliminates a low output hospital. A government hospital on the other hand might have trouble eliminating waste because it will receive customers who might not necessarily think the hospital is any good but will still go just because it is free.

The ATU tells us the expected effect of going to the hospital on someone who did not go:

ATU = E[Y1i | Di = 0] − E[Y0i | Di = 0]

The first term is unobserved: the potential health status of those who did not go, had they gone. The second term is observed: the actual health status of those who did not go.

So it seems pretty obvious that in the real world ATT and ATU are close to impossible to measure accurately. What we can hope to measure accurately is the average treatment effect:

ATE = E[Y1i − Y0i]

To measure this accurately we must run an RCT (randomized controlled trial), in other words randomly allocate whether people go to the hospital or not, and here a paradox arises. We can tell whether something works if we randomly allocate it, but if we randomly allocate it we are not maximizing the use of the hospital, since we are sending in healthy people. Yet if we don’t randomly allocate it, we cannot observe whether the hospital is working.
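The whole argument can be made concrete with a toy simulation. All the numbers below are illustrative assumptions: sicker people self-select into the hospital, and treatment adds a fixed +1 to health, so the true ATE is exactly 1.

```python
import random

random.seed(1)

TREATMENT_EFFECT = 1.0
N = 100_000

def mean(xs):
    return sum(xs) / len(xs)

# Each person has two potential outcomes; the sick choose the hospital.
people = []
for _ in range(N):
    y0 = random.gauss(0, 1)        # health without treatment
    y1 = y0 + TREATMENT_EFFECT     # health with treatment
    d = y0 < -0.5                  # self-selection: the sick go
    people.append((y0, y1, d))

# Naive comparison of observed outcomes: hospital-goers vs everyone else.
naive = (mean([y1 for y0, y1, d in people if d])
         - mean([y0 for y0, y1, d in people if not d]))

# The true ATE uses both potential outcomes, which no real study sees.
ate = mean([y1 - y0 for y0, y1, d in people])

# An RCT recovers the effect from observed data alone, because random
# assignment breaks the link between sickness and treatment.
rct = [(random.gauss(0, 1), random.random() < 0.5) for _ in range(N)]
rct_est = (mean([y0 + TREATMENT_EFFECT for y0, t in rct if t])
           - mean([y0 for y0, t in rct if not t]))

print(naive)    # negative: the hospital looks actively harmful
print(ate)      # exactly +1.0 by construction
print(rct_est)  # close to +1.0
```

The naive comparison comes out negative — hospitals “make you sicker” — even though every single patient benefits, which is selection bias in its purest form; only the randomized arm recovers the truth.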

This is also the case with control areas: if we do something differently in one country/state/city and leave the rest untouched, we can compare them to see if the area where we applied the new method is better off. So what’s the next step? If they are better off and we roll the method out to the control areas, we can no longer see if it’s working (think long-term effects); and if we don’t roll it out, those control areas are not benefiting from a method that could be improving their standard of living.

So the paradox here is between knowing something works and making it work for everyone. Whether we apply a randomized controlled trial or use control groups, chances are we are not helping as much as we could; and if we do not apply these methods, we are ignorant as to whether the program is helping people at that specific period in time. To properly understand treatment effects, a sacrificial lamb needs to exist. What’s generally done, though, is to assume that if something worked in the past, it will keep doing so.