On the inverse relationship between likelihood and falsifiability

Given our current knowledge of the world, we believe that the likelihood of a certain event is 50 percent. That is, we can’t really explain it very well; it just seems random.

Now say some new hypothesis comes along; call it hypothesis A. This new hypothesis predicts that the event we were curious about actually occurs at a different rate: A says that if we condition on some parameters that it explicitly specifies, then the probability that this event will happen is actually 90%.

Now let’s say that we take 5 observations and the event has only occurred once. The probability of this particular sequence under hypothesis A is p(1−p)^4 = 0.9 × 0.1^4 = 0.00009, whilst the probability under our original knowledge is 0.5^5 = 0.03125. So we should prefer the original hypothesis to this one.
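To make the comparison concrete, here is a minimal sketch (my own illustration; it ignores the binomial coefficient, which is common to both hypotheses and cancels when comparing them):

```python
# Likelihood of a fixed sequence with 1 success in 5 trials under each
# hypothesis; the binomial coefficient is the same for both and cancels.
def sequence_likelihood(p, successes, trials):
    return p ** successes * (1 - p) ** (trials - successes)

baseline = sequence_likelihood(0.5, 1, 5)  # current knowledge: p = 0.5
hyp_a = sequence_likelihood(0.9, 1, 5)     # hypothesis A: p = 0.9

print(baseline)          # 0.03125
print(hyp_a)             # ~0.00009
print(baseline / hyp_a)  # ~347: the data strongly favour the baseline
```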

In the example above, we used 5 observations to compare our two hypotheses. So what did we actually do? We compared the difference between their probabilities, and depending on whether the difference was positive or negative, we chose one or the other. It should be easy to see that the larger this difference is, the more confident we can be in our rejection or corroboration (“temporarily accepting” in Popper’s language) of the new hypothesis.

So now assume that we have some other hypothesis, hypothesis B, which predicts that this event actually occurs 50.1% of the time. It should be trivial to see that the difference in probability between our current state of the world and this new hypothesis’s prediction will be very small indeed. In other words, this hypothesis will be much harder to falsify.
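One way to quantify “much harder to falsify” is to ask how much data we would need to distinguish hypothesis B from the baseline at all. A rough normal-approximation power calculation (my own back-of-the-envelope, not from the original post):

```python
from math import ceil
from statistics import NormalDist

# Approximate sample size needed to detect p = 0.501 against p = 0.5
# at the 5% significance level with 80% power (normal approximation).
p0, delta = 0.5, 0.001
z_alpha = NormalDist().inv_cdf(0.975)  # ~1.96
z_power = NormalDist().inv_cdf(0.80)   # ~0.84
n = ceil((z_alpha + z_power) ** 2 * p0 * (1 - p0) / delta ** 2)
print(n)  # ~1,960,000 observations, versus 5 for hypothesis A
```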

What this means is that the more likely something is given our current state of the world, the harder it is to falsify; and the harder it is to falsify, the less scientific it is. Once again, if a hypothesis is very likely given what we know, then this hypothesis is NOT scientific. Indeed it is the opposite: the more unlikely it is, the more scientific it is. Why is this? Because we can test it.

Further reading:
https://plato.stanford.edu/entries/popper/
https://en.wikipedia.org/wiki/Falsifiability


When do we need science?

I often get into arguments with people, and invariably I end up invoking science as a way to present evidence. This invocation of mine is often countered with the accusation, ‘you must admit science isn’t perfect’, and I am forced to admit that this is indeed the case. What is frustrating is that I more often find myself arguing against the blind application of science than for it, so when this accusation is levied at me, a gross misunderstanding has occurred. With this in mind, I wish to clarify when the application of science is or is not appropriate or necessary.

Science is NOT appropriate for everyday living; intuition dominates daily life. We do not need science to tell us how much pressure to put on a pencil so the tip does not break; this is learned by trial and error. Similarly, as Nassim Nicholas Taleb likes to say, we do not need to ‘lecture birds how to fly’: birds can fly without understanding aerodynamics and the complicated physical laws that accompany it. A snake does not need to understand the mechanics of friction or momentum to learn how to slither. In this very same way, science will often shed light on things that we can already navigate perfectly well without this formalized mumbo jumbo. Religion would also fall in the intuitive domain.

Even science that is ridiculously close to the truth can be insufficient for taking action, due to the Precautionary Principle (a previous topic). Just because we find that something is unlikely does not make it less important; acting upon a hypothesis cannot be weighed without taking into account the magnitude of its implications. It follows that it is worthwhile to invest in preventative measures when the consequences of an event are devastating, even if the event is highly unlikely. An application of this is that it could be worth overreacting (a misnomer) to disease contagions such as Ebola and NOT to airplane accidents, because the former has a higher propagation effect (the probability of Ebola killing a million people is much higher than the probability of an airplane accident killing a million people).

With these devastating critiques of science, one might think I’m as anti-science as they come. This conclusion would be blatantly false; science is the most systematic way for humanity to approach objective truth. This truth need not imply anything about how we structure society or our lives. Yes, it may be misused, but so can every possible freedom afforded to society; indeed, this is the very definition of freedom. We avoid restricting the freedoms of the masses based on what some people may do: we do not quarantine every individual because some people may be murderers. Free inquiry is a fundamental principle of every open society; the goal of science is to codify our knowledge, not to tell us how to use it. The fact that each person may infer from it what he wills is the very basis of an individualist society, and why we need robust Kantian moral imperatives. This principled stance is vital to the open society: when an inconvenient truth emerges, we have no need to change our social behavior, because our behavior was not derived from scientific principles in the first place but from irrefutable morality.

Science is not perfect, and many hypotheses we take as temporarily right (a hypothesis can never be proven) end up being false; however, when we are dealing with ideology and things away from intuition, science is how we put our biases in check. When one discusses ideology based on hypotheses which are outside the intuition of daily life and are contradicted by many other individual experiences, science must be invoked. The very first step of the scientific method outlined by Popper, before ANY empirical work is conducted, is to clean up your hypothesis. If your hypothesis predicts the same observations an existing hypothesis predicts, then your theory isn’t precise enough to be meaningful, much in the same way the hypothesis that ‘shit happens’ is not a credible theory. Any theory that cannot be disproven is fundamentally flawed and must be discarded, or at least not believed, until it can be fleshed out to make observable predictions.

Technical point: the production of evidence is itself the output of a process, so we cannot ignore the bias or effort that goes into that process. If, for instance, ten thousand ideologically biased researchers are searching to prove hypothesis x, then by pure randomness we can expect that, testing at the 99% level, there will be around 100 studies showing an effect where none exists.
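A quick simulation of this conjecture (a sketch assuming each study independently tests a true null at a 1% false-positive rate):

```python
import random

# 10,000 studies of an effect that does not exist, each tested at the
# 99% level, i.e. a 1% chance of a spurious "significant" result.
random.seed(0)
false_positives = sum(random.random() < 0.01 for _ in range(10_000))
print(false_positives)  # ~100 studies "showing" an effect where none exists
```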

Finally, often we can come up with numerous theories that fit the same observations; invariably this happens in every field, so how do we choose one? The most philosophically consistent position to adopt in such situations is to choose the hypothesis that makes the least assumptions. This is often called Occam’s razor and it is possibly one of the most important elements in the selection of any scientific theory.


What’s the potential of online education?

About MOOCs

The hip term in academia for online education is the MOOC (massive open online course). The MOOC’s medium (the internet) isn’t really new; what is new is that the content is free, and that top universities have started to roll it out. These courses generally have very large cohorts of students, and although the completion rate is fairly low, completions are still significant in aggregate terms. Assuming the internet picks the best teachers, this implies that the average quality of teachers on the web is much higher than that of the average teacher overall. As MOOCs expand, they offer terrific possibilities for developing countries: the only requirement for this free knowledge is access to the internet. A 16-year-old Indian who has never been to school could potentially know more math than an MIT graduate. The only real advantage offline courses offer is direct contact with the teacher, though this is not necessarily as important as it may first appear; it could be that forums become extensive enough to answer every possible question a student might have (especially with some of the algorithms we can produce today). In terms of thesis feedback and supervision, however, there would be complications, and there are also limitations on how well more practical subjects can be mastered online.

Economic theories of education

Before discussing online education we must first lay down the groundwork for why people get educated.

The first theory is the theory of human capital, which says that people go to school or university to improve their skills and knowledge. Improving themselves makes them more valuable to companies, and as long as the value you can produce is higher than what the employer has to pay you, you should be able to get a job.

A second theory is signalling: here people don’t go to university or school to get better; they go merely to prove to employers that they have certain skills. For instance, getting a degree might signal that you are intelligent or that you work hard, traits that employers desire. This theory could imply limited capacity for social mobility, since signalling accreditation is not accessible to all.

Finally, the third main theory is the status theory: people go to university for cultural or societal ranking purposes. This is not very different from attending a church, though this model could imply a networking effect which boosts earnings.

The literature I am familiar with seems to indicate that signalling is the most prominent of the three. The specific measurements suggest that depth of education is secondary to selection criteria and the brand value of the university, both of which lean towards signalling.

edit: An example of how researchers try to separate human capital from selection is to look at those who were accepted into top universities but did not attend; some studies take exactly this approach.

My ballpark estimate for the value of attending university is something like 80% signalling, with a close match between the other two; human capital is probably slightly more important than status. This split isn’t the same across degree types, of course: humanities rely more on signalling, technical degrees could rely more on human capital, and MBA programs more on status.

Application to online education

All three theories leave a place for online education, but its potency differs under each. If signalling dominates, there is likely to be a wage premium for attending regular universities, since a large portion of their value is their selection criteria, which are diluted in an online system. However, if a shift should occur and human capital becomes more important, that is, if people attend university to improve their skills, then that spells a bust for traditional universities.

Making online education appeal to signalling will require credible testing that is resistant to cheating and plagiarism. It might seem plausible to just have screen sharing or a webcam active at all times during testing, but it is hard to say who would watch these; it is perhaps better if an automated process is found. If credibility is attained, online education may be able to signal things that regular universities don’t, such as discipline, initiative, independence or even entrepreneurship.

Demand and Supply

In my mind there are two main types of demand for these MOOCs: mental stimulation (perhaps for retirees) and career advancement. There are likely to be cases where the human capital model applies to career advancement, as some jobs might offer on-site testing, but it is perhaps better if testing centers were established where people could simply show up, use their knowledge to gain accreditation, and then not have to retake endless amounts of tests. In any case, if the goal is accreditation the cost will generally be higher; if the goal is the acquisition of skills or recreation, provision is very cheap, since no involvement with test centers and diplomas is required.

A special demand market that the flexibility of online education can tap into is full-time workers. This pool of people is likely to be gigantic and the talent endless: these are people so valuable to companies that the company cannot afford to let them off for a year to do an MBA or specialist program. Let’s also not forget that this extra choice for students will create competition with universities, and with competition, universities will be less able to select the very best candidates, since the pool of people they select from will be smaller. This will dilute their selection criteria, and subsequently their signalling value.

On the supply side, the business model of providers will shift attention to the lecturers, probably significantly cutting down the non-lecturer staff of universities. If some of the fields being taught online aren’t expanding (I want to say fields like anthropology generally evolve more slowly), the maintenance cost of updating the videos will be very low, so very few lecturers will be required. It could be, for instance, that the same mathematics videos are still watched 100 years from now, essentially killing the market for math lecturers. This could produce a winner-take-all effect, such as the music industry has witnessed, though probably not as prevalent, since the language barrier matters more here: language is the main medium of instruction, whereas music today barely relies on it. The winner-take-all effect could be monetized through textbooks, although the market for textbooks will shrink in aggregate because of MOOCs (controlling for shifts from developing to developed countries). It is likely that successful textbooks will be boosted as the reputation of the author (an MOOC lecturer) rises and offers higher brand value to the university hosting the lecturer, which can also be monetized in a number of ways. All this implies far fewer universities, unless people still value other things about them, such as the cultural or extracurricular aspects; it is also possible people will meet that demand by participating more in local clubs. There are also legal boundaries preventing such supply shifts from occurring, such as the need to be accredited by government agencies, and I am not certain how that affects online courses.

Present and future structures

Imagine a moving platform that holds a product and passes through different stations, each adding a new element to the product, where each element depends on the previous ones being properly installed. That is how I view education as it is right now, only the products are people. This method causes far too many defects, without necessarily being more efficient, since the energy expended to make sure each sequential piece is properly placed must come from the people themselves. So the main cost of reform is the switching cost, the initial cost of change. It doesn’t make sense for someone (regardless of their age) who hasn’t mastered a subject to move on to a more advanced subject that has the unmastered subject as a prerequisite. I don’t really need to produce evidence that it’s harder for a child who hasn’t learned the power rule to apply the chain rule.

It seems the easiest step towards making education more dynamic is pushing it online. Students have the ability to rewind, fast forward, pause, and really go at their own pace. The Khan Academy model also seems fairly effective: a quiz follows each concept, and students must ace the quiz before being recommended to move on, so all students are A students. Not to mention that the world would be much more efficient if degrees were given out for every concept mastered; that way, people would not get over- or under-educated.
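A minimal sketch of that gating logic (the topics, prerequisites and threshold below are made up for illustration):

```python
# Mastery-based progression: a topic unlocks only once every
# prerequisite quiz has been aced.
PREREQS = {"power_rule": [], "chain_rule": ["power_rule"]}
MASTERY = 1.0  # "acing" the quiz

def may_start(topic, quiz_scores):
    return all(quiz_scores.get(p, 0.0) >= MASTERY
               for p in PREREQS.get(topic, []))

print(may_start("chain_rule", {"power_rule": 1.0}))  # True
print(may_start("chain_rule", {"power_rule": 0.7}))  # False: master it first
```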

Perhaps the most backwards mechanism we apply is grading on a curve, that is, giving x% of a class an A or a B. This gives no indication of mastered material, and makes the goal to be better than the rest rather than to learn the material. Relative grading passes an information cost on to employers, who have to employ capital to learn what different grades mean, to see whether they meet an absolute qualification, and to see how graduates fare compared to graduates of other systems. An employer knows very little if an A or a B was received in a curved class, and his only way of knowing how much students learned in these classes is by knowing something about the school or university, which channels money to the elite with already established reputations and costs the employers less. This is in part why it’s good to have national or international testing (e.g. GCSE, IB, AP) that is widely accepted, so we can compare people. However, this information cost must still be borne when comparing people who took different types of tests, and in the case of universities, the lack of such tests makes it very hard to compare students.

The structure of education needs to be taken into account, especially in government funding; it could be funneling us towards some of these theories. For instance, the French education system, which I previously discussed, has the government pushing the status and signalling theories, which could amplify inequality. Online degrees probably haven’t had enough time to project degree-type value; in the future, the mere fact that you have an online education could signal things like discipline and initiative to employers, and may offer value in that people can boost their job experience whilst simultaneously boosting their education credentials, but these things will likely emerge over time.

As a final note, let’s not forget that online education is today much cheaper, which allows students to choose their careers more flexibly, whilst traditional graduates have to choose what will repay their loans, even if career advancement prospects in a given position are limited. However, this cannot be properly observed without specific econometric techniques to remove the selection bias of people who did not attend regular university.

links 08/10/2012

Articles:

When do economists agree?(fun)

Exponential Economist meets finite physicist(fun)

Bernanke and student conversation(fun..ish)

What came first, the chicken or the egg? Empirical Evidence!(fun…ish)

What are the key functions of Asset management?

What is the price of going short volatility?

Bestiary of Economists!(fun)

The economics of video games(fun..ish)

Gravity and international finance

Foreign banks and financial development

Corporate governance in financial institutions

Too crooked to fail? (fun…ish)

With Mitt Romney having released his effective tax rate, and it being below 15%, there has been a debate about optimal capital taxation. Here’s a good academic paper on it, and a slightly more comprehensible article; there is also an unsmoothed graph presented after that article.

Videos:

Want to be a crony?

Should we end the Fed?

Do Indie Video games have a competitive advantage?

Random economics knowledge bites:

Lecture on macro-economics (includes stuff on optimal currency area)

Marginal Revolution Site for economics

Fed lectures series

Does Basel III make sense?

Over the past couple of decades the mantra of deregulation has been overwhelmingly dominant, yet today light-touch regulation has been shown to increase volatility in growth. With a new era of regulation taking place, perhaps the single most globally important measure is Basel. Although Basel II was never fully implemented in the US, Basel III is now in line to be implemented, so it’s important to understand the risk management mechanics of this series of regulations. Basel aims to keep banks solvent by creating a system that evaluates risk based on leverage and the rating of assets. The innovations of Basel III are a capital conservation buffer, which restricts shareholder compensation if the equity level is too low, and a countercyclical buffer, an attempt at a more dynamic capital requirement which increases when the credit-to-GDP ratio rises. This is a great step in making Basel more dynamic, but as long as arbitrary static figures exist within it, it is not likely to be efficient.
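To illustrate the mechanics (a toy sketch; the asset classes and risk weights below are illustrative stand-ins, not actual Basel figures):

```python
# Risk-weighted assets (RWA) and the resulting capital ratio.
exposures = {  # asset: (amount, risk weight)
    "sovereign_bonds": (500, 0.00),
    "mortgages":       (300, 0.35),
    "corporate_loans": (200, 1.00),
}
capital = 30

rwa = sum(amount * weight for amount, weight in exposures.values())
print(rwa)                      # 305
print(round(capital / rwa, 3))  # 0.098, i.e. ~9.8% capital ratio

# The mechanic criticized below: zero-weight "low risk" assets never
# move the ratio, so piling into them permits unbounded leverage.
```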

It is a romanticised notion that this policy actually reduces risk by limiting the amount of exposure allowed; it can also be seen as a transfer of liquidity risk. From a retail bank’s point of view, the liquidity risk is passed to the citizen, who can now borrow less. The static capital requirement figures also assume an excessive amount of knowledge about how many good investments are really available, relying on a top-down approach to measuring how much banks should lend. Additionally, limiting leverage based on risk weights actually reduces diversification, because it encourages investment in low-risk assets while the number of low-risk assets has not increased; this reduction in diversification could increase risk in the long run. The system also allows an excessive amount of leverage if enough low-risk assets are used. In the worst-case scenario, fruitful investment will not be undertaken, and in the best-case scenario, predatory lending will decrease; given imperfect knowledge, at least one of these will occur.

Risk weights also encourage banks to pass risks on to other bearers who might not be their optimal holders. For example, holding mortgage-backed securities on a balance sheet today is a very expensive endeavour, which encourages passing this risk along. Basel could have been the culprit behind the reckless behavior that caused the financial crisis, since it indirectly encouraged securitization as a way to skip over capital requirements. This runs contrary to an optimal framework: to reduce risk on securitized assets, the optimal practice is to keep them with the bearers who have the most information about them, and the longer the chain of risk selling, the more fragmented and scarce the information on the product becomes. A more thought-out policy would take measures to ensure that the holders of the risk are not too far from the entity whose risk they are holding. A more bottom-up approach, such as giving regulators guidelines on systemic risk, is a much more potent way of controlling it. To boost regulatory performance, there should be incentives, such as bonuses tied to low systemic risk measurements, to ensure there are parties actively pursuing the interest of taxpayers.

Do hospitals make you sicker?

Selection bias is often a very destructive force when trying to determine what works and what doesn’t; it is often impossible to evaluate whether programmes are a success unless you’re under something extreme like a totalitarian regime.

Take hospitalization, for instance: if you were to ask people coming out of a hospital about their health status, it would probably be worse than that of randomly selected people from the population.

So let’s assign some simple terms. First, the treatment dummy:

$$D_i = \begin{cases} 1 & \text{if individual } i \text{ gets treated} \\ 0 & \text{otherwise} \end{cases}$$

So it is 0 if they don’t get treated and 1 if they do. Then we observe the actual outcome (health status) for each individual as $Y_i$.

To measure how much of an effect treatment has, we want to compare each person’s potential outcome with treatment, $Y_{1i}$, to their potential outcome without it, $Y_{0i}$; the individual treatment effect is $Y_{1i} - Y_{0i}$.

Yet the world presents a problem, since what we actually observe is only one of the two potential outcomes:

$$Y_i = D_i Y_{1i} + (1 - D_i) Y_{0i}$$

So there is no single number that represents what actually happens. We can measure the average effect of treatment on the treated (ATT), the average effect of treatment on the untreated (ATU), or the average treatment effect (ATE).

In public institutions there is a top-down selection problem, and the more private an institution is, the more the bias comes from self-selection.

The ATT tells us the effect of going to the hospital on the people who actually went:

$$\text{ATT} = E[Y_{1i} \mid D_i = 1] - E[Y_{0i} \mid D_i = 1]$$

The first term is the observed part: the health status of individuals who went to the hospital, given that they went. The second term is unobserved, because it is the potential health status of those same patients had they not gone.
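Spelling out the step the argument relies on: the naive comparison of hospital-goers with everyone else decomposes into the ATT plus a selection-bias term, which for hospitals is negative, since the sick select in:

$$\underbrace{E[Y_i \mid D_i = 1] - E[Y_i \mid D_i = 0]}_{\text{naive comparison}} = \underbrace{E[Y_{1i} - Y_{0i} \mid D_i = 1]}_{\text{ATT}} + \underbrace{E[Y_{0i} \mid D_i = 1] - E[Y_{0i} \mid D_i = 0]}_{\text{selection bias}}$$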

ATT can show us how much people gain from going to the hospital. In the real world, private enterprises are much more likely to survive if people notice a positive effect, and so the market eliminates a low-output hospital. A government hospital, on the other hand, might have trouble eliminating waste, because it will receive customers who might not think the hospital is any good but will still go just because it is free.

The ATU tells us the expected effect of going to the hospital on someone who did not go:

$$\text{ATU} = E[Y_{1i} \mid D_i = 0] - E[Y_{0i} \mid D_i = 0]$$

The first term is the unobserved portion: the potential health status of those who did not go, had they gone. The second term, which we do observe, is the actual health status of those who did not go.

So it seems pretty obvious that in the real world the ATU and ATT are very close to impossible to measure accurately. However, what we can measure accurately is the average treatment effect. This is represented by:

$$\text{ATE} = E[Y_{1i} - Y_{0i}]$$

To measure this accurately we must run an RCT (randomized controlled trial), in other words randomly allocate whether people go to the hospital or not, and here a paradox arises. We can tell whether something works if we randomly allocate it, but if we randomly allocate it we are not maximizing the use of the hospital, since we are sending in healthy people. Yet if we don’t randomly allocate it, we cannot observe whether the hospital is working.
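A toy simulation of both regimes (my own illustration: the population, true effect and selection rule are all made up):

```python
import random

# Sick people self-select into the hospital, the hospital genuinely
# helps, yet the naive comparison of means makes hospitals look
# harmful. Random allocation recovers the true effect.
random.seed(1)
TRUE_EFFECT = 0.5

def outcome(baseline, treated):
    return baseline + (TRUE_EFFECT if treated else 0.0)

# Self-selection: only those with below-average latent health go.
naive_t, naive_u = [], []
for _ in range(100_000):
    b = random.gauss(0, 1)
    (naive_t if b < 0 else naive_u).append(outcome(b, b < 0))

# RCT: a coin flip decides who goes.
rct_t, rct_u = [], []
for _ in range(100_000):
    b = random.gauss(0, 1)
    t = random.random() < 0.5
    (rct_t if t else rct_u).append(outcome(b, t))

mean = lambda xs: sum(xs) / len(xs)
print(mean(naive_t) - mean(naive_u))  # ~ -1.1: hospitals "make you sicker"
print(mean(rct_t) - mean(rct_u))      # ~ +0.5: the true ATE
```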

The same problem arises with control areas: if we do something differently in one country/state/city and leave the rest untouched, we can compare them to see whether the treated area is better off. So what’s the next step? If it is better off, then we roll out this method to the control areas and can no longer see whether it’s working (think long-term effects); and if we don’t roll it out, then those control areas are not benefiting from a new method that could be improving their standard of living.

So the paradox here is between knowing something works and making it work for everyone. Whether we apply a randomized controlled trial or use control groups, chances are we are not helping out the best we can; and if we do not apply these methods, then we are ignorant as to whether the program is helping people at that specific period in time. To properly understand treatment effects, we need a sacrificial lamb. What is generally done instead is to assume that if something worked in the past, it will keep doing so.

Economics vs Politics. Is free trade any good?

Economics experts fight over numerous things; it’s the shame of the profession. The fact is that almost anything can be ‘proven’ if you use the right econometric techniques, the right time period and the right data set, so coherent theories are in many ways more important than empirical evidence. Regardless of the dichotomy in economic debate, there is at least one thing that would be in the economic bible, and that is free trade.

Free trade simply means allowing products in and out of the country without imposing tariffs. The rationale for tariffs is that you want people to buy products made inside the country, so that money stays in the country and develops the domestic industry. From the consumer’s point of view, however, why should he or she care where the product is made? Even if only the Japanese car industry thrives, it will export its great cars to your country and raise your standard of living. Putting tariffs on their cars means the consumer must either accept a lower-quality domestic car or buy the Japanese car anyway at a higher cost; either way your standard of living falls, because you either have an inferior product or less disposable income.

The part the economics profession usually forgets is the political aspect of tariffs. The reason you might not want free trade is if you believe instabilities between two countries may occur. You don’t want Japan to be providing all your cars, because if they decide they no longer like you and want to go to war with you, they can block all their products from entering your country. Depending on how good the environment in your country is for setting up businesses, it is likely that either another country will provide the products or a domestic industry will emerge; but if the Japanese industry was dominant, chances are your new supplier will not be as proficient, and depending on the product, that could have devastating effects on the economy. If the product were food, for instance, domestic workers trying to replace it without enough expertise could end up selling toxic food.

So tariffs are rationalized as a bargaining chip between governments. If you believe that relations between two countries are not likely to deteriorate, then free trade is without doubt the route to go; this would probably describe the US and Israel, or Cyprus and Greece. So China making everything nowadays is not likely to be problematic, unless one believes that they might one day use it against other countries.