Friday, 3 July 2015

The confusing case of Eric Abetz

Eric on marriage equality


Following the SCOTUS marriage equality decision, there has been a lot of interest in the state of marriage equality in Australia. Many Liberal (read: not liberal at all) MPs have come out in favour of marriage equality, or at least a robust debate about it. But some amongst the National/Liberal coalition remain unconvinced. Some have expressed their objections in relatively respectful terms, avoiding inflammatory and false claims...and then there is Eric Abetz.

In case you have been living under a rock, or, what is more likely, successfully managing to screen out the meaningless drivel of your elected leaders, Eric Abetz has done us all proud with the following objection to marriage equality:

"...legalising gay marriage would lead to polyamory"

Eric...the voice of reason

In the interests of calm, rational debate, instead of dismissing this comment as the out-of-touch rantings of a conservative losing his grip on reality and progressively alienating the Australian public, I decided to scientifically evaluate the claim. I mean, it should be pretty easy, right? There are countries with legalised polygamy and countries with legalised same-sex marriage. It shouldn't be too hard to establish whether there is a correlation. And even though such a relationship wouldn't establish causality, it would certainly provide some rudimentary evidence to support the scientifically minded Eric Abetz fans out there...

So let's have a little dig, shall we?

The evidence

According to my research, there are twenty countries that allow same-sex marriage nationally: Netherlands, Belgium, Spain, Canada, South Africa, Norway, Sweden, Portugal, Iceland, Argentina, Denmark, France, Brazil, Uruguay, New Zealand, Britain, Luxembourg, Finland and Ireland (source: http://www.freedomtomarry.org/landscape/entry/c/international). Adding the USA to that list makes 21 countries, and Mexico allows same-sex marriage regionally. So we could extend that to 22 countries, to really ramp up the statistical power. I want Abetz to feel confident that this analysis has sufficient power to detect an effect. I'm sweet like that.

Now let's look at the list of countries that allow polyamory...or more accurately polygamy, since polyamory would just be multiple loves, and I suspect that multiple legal spouses is the terrible consequence of the Gaypocalypse that Eric actually fears.


Rainbows and comets falling from the skies...the Gaypocalypse has come

Anyhoo, here is the list of countries which legally recognise polygamous marriage: Iraq, Malawi, Libya, Namibia, and Uganda. Now, you may notice that there is a startling lack of correspondence between the first list and the second. But given that you (and Eric) may prefer your data in visual form, I have attached a Venn diagram which summarises the situation.
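If you would like to check the overlap for yourself, here is a minimal Python sketch (my own, not from the post's source) that treats the two lists above as plain sets and prints their intersection; the commented-out lines show how you might draw the Venn diagram with the third-party matplotlib-venn package.

# Overlap between the countries listed above: those with same-sex marriage
# and those that recognise polygamous marriage.
same_sex_marriage = {
    "Netherlands", "Belgium", "Spain", "Canada", "South Africa", "Norway",
    "Sweden", "Portugal", "Iceland", "Argentina", "Denmark", "France",
    "Brazil", "Uruguay", "New Zealand", "Britain", "Luxembourg", "Finland",
    "Ireland", "USA",
}
polygamy = {"Iraq", "Malawi", "Libya", "Namibia", "Uganda"}

overlap = same_sex_marriage & polygamy
print(f"Countries in both lists: {overlap or 'none'}")

# Optional Venn diagram (requires `pip install matplotlib-venn`):
# import matplotlib.pyplot as plt
# from matplotlib_venn import venn2
# venn2([same_sex_marriage, polygamy], set_labels=("Same-sex marriage", "Polygamy"))
# plt.show()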

"But" I hear you object, "this data is not germane to the question. What we are really interested in is whether same-sex marriage leads to polygamy." 

Okay, so let's consider the question temporally. Does the introduction of same-sex marriage lead to the later introduction of polygamy? To answer this, we need to consider the countries that have had same-sex marriage the longest. The Netherlands introduced same-sex marriage on April 1, 2001. That has given them fourteen long years for the insidious threat of polygamy to slither its way into the mainstream. If any country was going to have fallen prey to Eric's nightmare scenario, surely it should be the Netherlands? I mean, they even have the word "nether" in their name, and we all know you can refer to hell as the Netherworld...coincidence? I think not. But a cursory examination of their laws reveals that polygamous marriages are in fact illegal in the Netherlands. Of course, samenlevingscontracts (cohabitation contracts) are an arrangement that can involve more than two people, but these are not considered equivalent to marriage, being merely a mechanism for governing property relationships between people who cohabit. In terms of evidence to justify fears of a causal relationship, I'd say this is pretty poor. One Abetz out of five.

Belgium is the next country to have introduced same-sex marriage, on June 1, 2003. Again, that seems like more than enough time for the emergence of polygamy if indeed there was a causal link. But again, Belgium does not recognise polygamous marriages. It also has a framework for administering property rights between cohabiting individuals (wettelijk samenwonen / cohabitation légale, i.e. legal cohabitation), but as in the Netherlands, this does not imply a sexual relationship at all. Once again, not looking great for Captain Traditional Marriage.

I could continue, but I think you get the point. Not a single country which has introduced same-sex marriage has gone on to introduce polygamous marriage. In fact, contrary to the arguments of Eric Abetz, there is a strong overlap between countries that criminalise homosexuality and countries that allow polygamous marriage. If I were being facetious, I could suggest that tolerance towards homosexuality could be seen as protective against polygamy, but I won't.

There are in fact only two countries which currently permit same-sex marriage and recognise polygamous marriage: South Africa and the UK. In both countries, however, recognition of polygamous unions preceded same-sex marriage. So the direction of causality, on the basis of this small sample, would seem to be running in the other direction. And there are far more countries which do not permit homosexuality and do permit polygamy.

Taken together, I have to say the evidence goes very strongly against Eric Abetz here. Countries which permit polygamy tend to criminalise or outlaw homosexuality, and countries that permit same-sex marriage do not permit polygamous marriage. Being the sensible man that he is, I am sure that on perusing this evidence Eric Abetz will immediately realise that there is no justifiable reason to connect same-sex marriage to polygamy and, if anything, a strong reason to believe there is a negative association between the two. I am sure we will be seeing his Facebook profile pic going rainbow very soon. I'll be waiting with bated breath.


Rainbow Eric: Protector of Monogamy


Monday, 9 September 2013

Folk Psychological Theories about Dogs

Dog training is an area that is rife with theories and 'experts'. The amount of nonsense in circulation is impressive when you consider that psychologists have been studying animal learning and behaviour since the 1960s. This branch of experimental psychology has some of the most replicated findings in the whole discipline. All of this means we have a very good idea of how animals learn and how you can best train them. And yet the majority of people carry around some very odd notions about dogs and how they learn.

These kinds of beliefs are usually called intuitive folk theories, and people have them about all sorts of things. Most people have intuitive psychological theories and intuitive theories about physics and statistics. These theories are best guesses, and for lots of situations they work fine, but when you compare them to actual empirical science they tend to be horrendously wrong. Below I have listed the ones that, in my opinion, are the most misinformed and require immediate rebuttal.

1) Dogs are basically wolves. You need to be alpha and dominant otherwise your dog doesn't respect you and will never behave.

I'm a wolf. Obviously.

Dogs were domesticated between 20,000 and 40,000 years ago. At that point, they started to take a totally different evolutionary path to their wolfy cousins. We selectively bred the wolves with the gentlest temperaments and the least aggression. We killed pups that represented a danger to us. Wolves, on the other hand, had to become less social and more aggressive to survive us and our newly acquired weapons. So dogs became more and more docile and wild wolves became more aggressive and less sociable. Dogs learned to do things that no wolf does. Dogs watch human facial expressions very carefully. They have learned over the long years with us that our faces provide excellent clues about how humans are feeling. Wolves do not read human facial expressions. They have not had the benefit of learning how we express our emotional states. Dogs do have a social hierarchy, but it is far less rigid and more nuanced than one dominant alpha with the remainder as subservient. Human beings also have social hierarchies, but this doesn't mean we learn best by being dominated. There is a wonderful book called Dog Sense which is packed with empirical research on this topic and dispels the dogs = small wolves myth beautifully.

There are still some behavioural similarities between dogs and wolves, in much the same way that there are some behavioural similarities between humans and chimpanzees. But I don't think many people would seriously try to understand human politics by studying chimpanzee social interactions.

Now that I think about it....

There is a correlation between dog parents who mollycoddle their dogs and bad behaviour, but that tends to be because they don't implement a consistent training program with their dogs. A macho, alpha trainer who believes in dominance is more likely to implement a consistent (potentially rigid) training program. But the good behaviour is caused by the existence of the consistent training program, not the hogwashy theory behind it.

2) There is one correct set of commands and signals to train/there is one correct way to reinforce behaviour

Many training schools will say that their commands are better because they work with natural body language, or that using a dog whistle is better because it sends a pure signal. There isn't a lot in these claims. There are some things that are generally true about animal learning: reinforcement works better than punishment, and a reward that is more desired works better than a reward that is less desired. Other than a few guidelines like these, animals can be trained to do some pretty unnatural things in response to some unnatural cues. A rat can be taught to push a lever to dispense cocaine just by using conditioning principles. Lever pushing isn't a natural action for a rat and cocaine isn't a natural appetitive stimulus for a rat, and yet they can learn this with relative ease.

Just like momma used to make.

3) Dogs can feel guilt. A variant of "my dog knows when he has done something wrong".

Here is the scenario. You come home from work and puppy has weed on the carpet. You call him over in a gruff tone and point angrily at the carpet. He tucks his tail under his legs and starts skulking around. You know he knows what he has done. So you discipline him. Job done. He has learned a valuable lesson...right? So very wrong. Your dog cannot learn an association between an action that happened four hours ago and an outcome now. He can't even learn an association between something that happened four minutes ago and now. You need to reinforce or punish a behaviour within five seconds of it occurring, or all your dog is learning is that you are an unpredictable weirdo.

"But he looked so guilty". Dogs are not capable of feeling guilt. Guilt is a very complex emotion that requires the ability to engage in counter factual thinking and perspective taking. That is, the dog would need to have the ability to think "I could have not peed on the carpet and peed on the grass instead" and "Even though I am happy to pee on the carpet my owner doesn't like it". Dogs do not have these cognitive abilities. Small children don't even have these abilities. Perspective taking in particular doesn't fully develop until around the age of seven.

I have complex cognitive skills that eclipse your three year old's. Or not.


So why does your doggy look 'guilty'? He is responding to your tone and body language and trying to appease you with his behaviours. This is how dogs say, "I'm sorry, don't hurt me." If you saw that wee and instead of getting stiff and angry said, "boo boo babykins" in an excited happy tone, your dog would waggle his tail and come over to be petted. So what have you taught your dog with this experience? You have taught him that you coming home is a bad thing. You have taught him that coming when you call is a bad thing. In short, you have not taught him what you intended and you have instead taught him two things you didn't intend to teach him.

"Alright then, how the hell am I suppose to teach my dog then?"
Well, the most effective way to teach your dog is through simple conditioning procedures. I will run through these briefly.

In the wonderful world of Skinner boxes and lever-pushing rats, things can be reinforcements or punishments, and they can be positive or negative. And that, by and large, is it.

• Reinforcement is anything that increases the prevalence of a behaviour
• Punishment is anything that decreases the prevalence of a behaviour
• Positive means I am giving/adding something
• Negative means I am removing/subtracting something

To avoid confusion, don't think of positive and negative as good and bad. Think of them as the mathematical symbols 'plus' and 'minus'. Positive is adding something. Negative is subtracting something.
Then we put them all together.

Positive reinforcement = When the dog performs a behaviour on cue, I give the dog something to increase the likelihood of that behaviour occurring again. This needs to be something the dog likes (appetitive). An example is giving the dog a treat when he sits.

Negative reinforcement = When the dog performs a behaviour on cue, I take something away, which increases the likelihood of that behaviour. This needs to be something the dog doesn't like (aversive) and is motivated to avoid by producing the desired behaviour. Negative reinforcement doesn't get used very often in dog training. An example would be if the dog had rolled in something smelly and didn't like the odour. You put the dog in the bath and afterwards the bad smell is gone. The act of taking a bath has now been negatively reinforced, because the bad smell that was present is gone.
It is important to remember that if you see a bad behaviour, introduce something aversive into the situation (a loud noise) and then remove it when the bad behaviour stops, this is NOT negative reinforcement. It is an example of positive punishment.

Positive punishment (punishment by application): if the dog performs a behaviour and I add something to decrease the likelihood of the behaviour. This traditionally means adding something to the situation that the dog doesn't like (aversive). An example would be giving a sharp 'no' when the dog grabs at your shoe. The dog drops the shoe and is less likely to grab it in the future, because that behaviour is now associated with something he doesn't like.

Negative punishment (punishment by removal): if the dog performs a behaviour and I take something away, and this decreases the likelihood of the behaviour. Logically, this means you are taking away something the dog likes (appetitive stimulus). So if your dog jumps up on you and you turn your back to him, this is negative punishment. You are removing something the dog likes (your attention), and this will decrease the likelihood of the puppy jumping up again.
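For the programmatically inclined, here is a tiny Python sketch of that 2x2 grid (my own toy illustration, not anything from the research literature): you say whether a stimulus was added or removed and whether the behaviour becomes more or less likely, and it names the quadrant, using the same examples as above.

def quadrant(stimulus_change, behaviour_change):
    """Name the operant-conditioning quadrant.

    stimulus_change:  'added' or 'removed'       (positive vs negative)
    behaviour_change: 'increases' or 'decreases'  (reinforcement vs punishment)
    """
    sign = "Positive" if stimulus_change == "added" else "Negative"
    effect = "reinforcement" if behaviour_change == "increases" else "punishment"
    return f"{sign} {effect}"

# Treat when the dog sits -> sitting becomes more likely.
print(quadrant("added", "increases"))    # Positive reinforcement
# Bath washes away a bad smell -> taking a bath becomes more likely.
print(quadrant("removed", "increases"))  # Negative reinforcement
# Sharp 'no' when the dog grabs a shoe -> grabbing becomes less likely.
print(quadrant("added", "decreases"))    # Positive punishment
# Turning your back when the dog jumps up -> jumping becomes less likely.
print(quadrant("removed", "decreases"))  # Negative punishment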




A simple summary is given here:



Using these guidelines you can increase the behaviours you want to see in your dog and decrease the behaviours you don't want to see, without having to pee on your dog to assert yourself as alpha (the way wolves do). If you want to try peeing on your dog periodically and see how effectively that modifies your dog's behaviour, you could. I will let the American Veterinary Society of Animal Behavior have the last word here, because they seem like guys who would know what they are talking about.

"Despite the fact that advances in behaviour research have modified our understanding of social hierarchies in wolves, many animal trainers continue to base their training methods on outdated perceptions of dominance theory. Dominance is defined as a relationship between individual animals that is established by force/aggression and submission, to determine who has priority access to multiple resources such as food, preferred resting spots, and mates (Bernstein 1981; Drews 1993). Most undesirable behaviours in our pets are not related to priority access to resources; rather, they are due to accidental rewarding of the undesirable behaviour. The AVSAB emphasizes that animal training, behaviour prevention strategies, and behaviour modification programs should follow the scientifically based guidelines of positive reinforcement, operant conditioning, classical conditioning, desensitization, and counter conditioning."

Sunday, 21 July 2013

Paper aeroplanes in space

This post was inspired by a conversation I had with my friend Nikki about unemployment. After discussing the painful process of job seeking, she composed a sonnet in honour of her desire to work in fast food.
O, trap of grease
How you both fill and fulfill me
Thine stench and texture
Both excite me and envelope me with desire

Of course it was all very amusing, but it got me thinking about how vile the process really is, and hence this post was born. As an ode to the odious process of job seeking, I have compiled a list of the four worst things about job seeking and unemployment.

Sending a job application is like throwing a paper aeroplane into the void of space and hoping it will crash land on an inhabited planet. You know when you write one that the chances of anybody reading it are vanishingly small, yet you still must spend hours hatefully crafting your work experience into action-oriented sentences liberally peppered with offensively bland keywords. You hope your flimsy creation will navigate itself through the minefield that is the keyword-seeking software. This software is like a rapacious, mindless beast, happily rooting for the truffles that are your words. Of course, everyone else's resumes are likewise splattered with these meaningless pieces of jargon, so the truffles are more like undergraduate degrees: too common to be interesting to, or even acknowledged by, most HR professionals. Which leads to depressing fact the first: it has been estimated that 75% of applications will not be acknowledged in any way.

Your application has been successfully launched into the empty vacuum of space

Now, I will acknowledge that unlike the majority of the statistics I provide on this site, I do not have an academically rigorous source for this one. The Googling I did, however, kept coming up with this figure, and it tallies pretty well with my own experience, so I invite you to think about whether it tallies with yours as well. Considering that with an electronic application system it takes exactly zero effort to acknowledge your application and send out a generic email when the position has been filled, I think there is no excuse for this egregious breach of basic human etiquette. I may not be your ideal candidate, but if I have spent 3 or 4 hours on an application for your company, the least you can do is set up an automatic notification system to let me know I am going to continue to live on baked beans for at least another week. Psychologically speaking, this whole process of sending a resume and getting no response is totally non-reinforcing. In human and animal learning, if a behaviour (say, posting a Facebook status) is consistently paired with something you like (Facebook likes), this will increase the likelihood of your performing the action in the future. But if you perform a behaviour and consistently receive nothing at all, you stop doing it. Even worse, people hate being ignored; it is why ostracism is such an effective social punishment. So hearing nothing becomes a punishment. The behaviour of submitting a resume actually starts becoming associated with a punishment, making you even less likely to submit resumes in the future. That is why studies have consistently found that the longer you are unemployed, the fewer job applications you submit. This total lack of basic human etiquette is actually damaging to psychological well-being and motivation.
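To make that extinction point concrete, here is a toy Python simulation of my own (the numbers are invented, not taken from any study): a job seeker's tendency to send applications gets nudged upwards whenever an application receives any acknowledgement, and decays towards zero when applications are met with silence.

import random

def simulate(weeks, p_response, learning_rate=0.2, tendency=0.8):
    """Toy learning model: 'tendency' to apply is nudged towards 1 when an
    application gets a response, and towards 0 when it is ignored."""
    history = []
    for _ in range(weeks):
        if random.random() < tendency:   # the job seeker sends an application
            outcome = 1.0 if random.random() < p_response else 0.0
            tendency += learning_rate * (outcome - tendency)
        history.append(tendency)
    return history

random.seed(1)
ignored = simulate(weeks=26, p_response=0.0)    # no acknowledgements at all
answered = simulate(weeks=26, p_response=0.75)  # most applications acknowledged
print(f"Tendency to keep applying after 6 months, ignored:  {ignored[-1]:.2f}")
print(f"Tendency to keep applying after 6 months, answered: {answered[-1]:.2f}")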

Which brings us neatly to the second depressing reality about job seeking and unemployment. Long-term unemployment is extremely bad for your psychological and physical well-being. There are numerous studies showing that long-term unemployment is associated with increased incidence of major depressive disorder, anxiety, psychosomatic symptoms, low subjective well-being and poor self-esteem (http://www.sciencedirect.com/science/article/pii/S0001879109000037; http://www.academia.edu/1213745/Associations_between_unemployment_and_major_depressive_disorder_evidence_from_an_international_prospective_study_the_predict_cohort_). The long-term unemployed are also six times more likely to commit suicide (Bartley et al, 2005), and one author has estimated that the effect of being long-term unemployed is equivalent to smoking ten packs of cigarettes a day (Ross 1995). Stop and let that one sink in for a while. Unsuccessful job seeking is actually toxic. Of course, the correlation runs the other way too, with the unemployed more likely to engage in unhealthy behaviours (Waddell and Burton 2006). As Mansel Aylward, director of the Centre for Psychosocial and Disability Research at Cardiff University, so elegantly said: "Sickness and disability are among the main threats to a full and happy life; work incapacity has the most significant impact on individual, the family, economy and society."

I love you sweet desk job


Of course, to get a job you must go through the process of applying for one, which brings us full circle to the third depressing reality about job seeking and unemployment: resumes are written in a totally revolting and inauthentic style. There aren't many things as depressing as pretending to be enthusiastic about things no sane human could ever have a legitimate interest in. Could anyone ever write the sentence “I am passionately dedicated to statistical analysis and generating actionable insights” without wanting to punch themselves in the nose? I wrote that gem myself and felt dirty afterwards. The language you have to use in crafting these masterpieces of doublespeak is the problem: resumes and cover letters require a revolting stylistic mix of self-aggrandisement and slavish servility that most people rightly feel is artificial and ridiculous. There is nothing authentic about the way you express yourself in a resume, right down to the bizarre syntax you have to use to begin every sentence with an action word. Here are a few choice extracts from my resume:
• Pioneered several bespoke statistical calculators for use by all analysts
• Created innovative analytic technique for analysing data for a major medical device company
• Successfully presented results to a variety of international clients
Reading this makes me hate myself. It makes me hate anyone who could hire me after reading it.

This collection of words may be single handedly responsible for preventing intelligent aliens contacting earth

The final depressing reality about the job-seeking process is probably the most depressing one on this page. The system of resumes and cover letters can be as effective at picking the correct candidate as picking at random. That’s correct: depending on how you conduct the process, you are as likely to pick a good employee by flipping a coin, or throwing a dart at a bunch of resumes stuck to the wall, as by using the cumbersome HR machine. This finding comes from a meta-analysis examining the efficacy of a number of inputs into the selection process (Schmidt, F. L., & Hunter, J. E. (1998). The validity and utility of selection methods in personnel psychology: Practical and theoretical implications of 85 years of research findings. Psychological Bulletin, 124, 262-274). As an aside, a meta-analysis is a sophisticated statistical technique that uses studies as individual points of data, instead of individual people as a standard experimental study would. Meta-analyses are an extremely powerful technique that can give a good estimate of the real size of an effect in the general population.
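If "using studies as individual points of data" sounds abstract, here is a minimal sketch of the core idea: a naive fixed-effect pooling with invented numbers, not the actual Schmidt and Hunter data (their method also corrects for things like measurement error and range restriction). Each study contributes its observed validity correlation, weighted by its sample size.

# Toy meta-analysis: pool validity correlations from several studies,
# weighting each study by its sample size. All numbers are invented.
studies = [
    {"n": 120, "r": 0.35},
    {"n": 450, "r": 0.51},
    {"n": 80,  "r": 0.22},
    {"n": 300, "r": 0.48},
]

total_n = sum(s["n"] for s in studies)
pooled_r = sum(s["n"] * s["r"] for s in studies) / total_n
print(f"Sample-size-weighted mean validity: {pooled_r:.2f} (total N = {total_n})")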
This study of studies found that out of 20 possible information sources, the information contained in your resume, including reference checks, job experience, years of education and interests, ranked 13th, 14th, 16th and 17th at predicting job performance. The only things ranked lower were graphology (the pseudoscience of analysing your handwriting) and your age. The best predictors were work sample tests (performing a test run of the main tasks you will be required to perform) and your general intelligence. Structured employment interviews came in third, with unstructured interviews coming in 9th. In an ideal scenario, employers are picking the 13th-17th best predictors of job performance to whittle down a list of potential candidates, and the 3rd (or 9th) best predictor to make the final decision. You could set up a psychic hotline recruitment business and perform as well as the standard process for employee selection. On second thoughts, forget that, I have a brilliant idea for a new business.

Mirror, mirror on the wall, who should be my PA?







Monday, 8 July 2013

Beware of magic numbers

Every so often social media erupts with a new, numeric meme. This magic number purports to summarise in one easy, often round integer, some universal truth. The classic example is the “You only use 10% of your brain, we have all this untapped potential” hokum that you often hear regurgitated. My most recent numeric meme experience was “you have 35,000 thoughts a day”.
My Facebook and Twitter feeds started trending with this claim, and because I am a scientist, a data nerd and a profound killjoy, I decided I would investigate this little snippet. There are a couple of problems with any claim like this one:

1. How is a thought operationalised? In sciences like psychology, there aren't universal definitions of units the way there are in other sciences. For example, if I made the claim that on average people are 9 ft tall, you could whip out your ruler, gather a random sample of people and get measuring. You and I would have the same definition of one foot (i.e. 30.48 cm) and you would easily be able to demonstrate that my number is bogus. Since we all have the same rulers, a claim like this is easier to assess objectively.

A thought, however, is not a standardised unit. There aren't even very good definitions of a thought, much less guides on how to distinguish one discrete thought from another.
Wikipedia gives us this helpfully circular definition:
“Thought can refer to the ideas or arrangements of ideas that result from thinking, the act of producing thoughts, or the process of producing thoughts. In spite of the fact that thought is a fundamental human activity familiar to everyone, there is no generally accepted agreement as to what thought is or how it is created.”
What Wikipedia does get right is that there is no generally accepted agreement on what a thought is. Certainly in cognitive psychology we wouldn't use the term ‘thought’ as an operational term, because it is too vague and unspecified.
Additionally, thoughts aren’t really discrete units like centimetres or feet. Human minds work using a kind of chaotic spreading activation. What this means is that the activation of one idea spreads to semantically linked thoughts and concepts. This is why you can start talking about work, remember you had a coffee at work, think that you like coffee a lot more than tea, and start talking about tea in the next breath. Consider that chain of thoughts for a minute. How many thoughts is that? Is that one big thought about coffee, your experience of it today and your rating of it relative to tea? Or is it three thoughts, one for each concept? What about the sub-threshold thoughts that helped you link all those ideas together?
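To make the spreading-activation idea concrete, here is a little Python sketch of my own (the network and the numbers are invented): activation starts at one concept and leaks out along semantic links, so related ideas end up partially active without you ever deliberately 'thinking' them.

# Toy spreading-activation network. Edge weights say how strongly one
# concept primes another; activation decays as it spreads. Numbers invented.
links = {
    "work":   {"coffee": 0.6, "deadline": 0.5},
    "coffee": {"tea": 0.7, "morning": 0.4},
    "tea":    {"biscuits": 0.5},
}

def spread(start, steps=2, decay=0.5):
    activation = {start: 1.0}
    frontier = {start}
    for _ in range(steps):
        next_frontier = set()
        for node in frontier:
            for neighbour, strength in links.get(node, {}).items():
                boost = activation[node] * strength * decay
                if boost > activation.get(neighbour, 0.0):
                    activation[neighbour] = boost
                    next_frontier.add(neighbour)
        frontier = next_frontier
    return activation

print(spread("work"))
# 'work' is fully active, 'coffee' partially active, 'tea' only faintly so.
# Where does one 'thought' end and the next begin?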

2. What is the source of the claim? If I am going to go around repeating an empirical claim, I want to make sure that the claim actually has some evidence to back it up. So the next step in my over-analysing of this popular meme was to look for the source of the claim. A fairly rudimentary search on Google Scholar confirmed my suspicions. There was definitely no published empirical work asserting that we have 35,000 thoughts a day, or any other specific number either. The closest I could find was a nutrition study that aimed to measure how many diet-related thoughts we have a day. Even this study, with its relatively limited scope, had great difficulty defining what a single food-related thought was, and the researchers essentially just ended up asking participants to mark down in a diary every time they thought about food. This approach would not really work for measuring how many thoughts of any kind we have in a day.

Imagine me asking you to write down in a journal every time you had a thought. The very act of thinking "I better write down all my thoughts today" is a thought. Even if you sat basically catatonic for the whole day, if you remembered the task at all, you would sit there tallying all day. Apart from being immensely impractical, that kind of study wouldn't tell us very much about the act of thinking in a natural setting.

Having established that there was no published empirical source, I kept digging. At this point, serious alarm bells started ringing. Not only could I not find any empirical source, I couldn't find any primary source of any kind. It seemed like this meme had just sprung into life of its own accord, infecting social media like a virus. Finally, after a few hours of digging, I found THE source. It was a promotional info-graphic which made a number of empirical claims, amongst them our 35,000 thought cap. Here it is for your viewing pleasure (and a link for those who are truly curious).




If you look at the bottom, you will see, in tiny pink writing, a handful of references. At this point I got excited and thought that maybe I had misjudged the magic number. Maybe there was some science behind the hype. Or not.

The first reference was http://sourcesofinsight.com/10-ways-to-defeat-decision-fatigue/, a self-help website article on the ways to help defeat decision fatigue. It outlined a list of fairly sensible time saving techniques like making lists and setting time limits on tasks. All very practical but sadly lacking in magical numbers. Certainly nothing worthy of meme-ification.

The second reference was http://seekingalpha.com/instablog/698556-jason-matias/377101-decision-fatigue-what-investors-should-know-about-the-science-of-decision-making, a blog post on an investment website which featured some very interesting and totally unsubstantiated claims like this gem “Ninety-five percent of the choices we make every day are managed autonomously by the filters we have developed though our interaction with the world and society. The other five percent of the decisions we make are made by our active self: our ego.” This article was full of magic numbers and certainly some of them might be meme-ifiable but sadly, my magic number was nowhere to be seen.

The next reference was http://www.sciencedaily.com/articles/a/amygdala.htm, a link to a page describing the anatomy and function of the amygdala (it is involved in the processing of emotions, amongst other things). This was certainly a very science-heavy link, giving some neuroanatomical authority to the claim, but it was unfortunately also totally unrelated to the magic number.

Just when I was starting to lose all hope, I found this: http://powerofpositivity.net/?p=349. I saw the number 35,000. I got very excited. My data-philic heart started to pound. The reference was a blog post by basketball coach, trainer, author and husband Cornell Thomas, and there it was: the key line, the source of all this meme excitement on social media.

“A survey I read said the average adult makes about 35,000 decisions per day.”

And that was it. No reference. No link. No nothing, just this one sentence. Now it might be true that Cornell Thomas read just such a survey. Unfortunately without the source, this claim has all the empirical strength of a Cosmopolitan personality test.

To make matters worse, the info-graphic itself was part of a press release promoting...you guessed it, a decision-making app. (For those who are interested: https://www.easilydo.com/). Hours of my time, thousands of Twitter and Facebook mentions, and this empirical claim is reducible to a marketing ploy.

There is a moral in all of this, of course, other than the fact that I probably need to get a hobby. When you hear any kind of empirical claim, don’t take it at face value. Don’t assume that it is correct, that it has been researched, or that the speaker even knows whether it is true or not. This is especially important now, as Australia rolls into election time and politicians start flinging around “facts” like this one.

“Most asylum seekers arriving by boat are economic migrants and on some boats 100% of the asylum seekers are in fact economic migrants and not genuine refugees.” - Bob Carr June 2013 (One of many sources)

This is despite the fact that government records demonstrate that more than 90% of asylum seekers arriving by boat are found to be genuine refugees (Asylum seeker facts). Five minutes of research, and I mean easy Google research, can disprove many claims like this, but most people will never take the trouble to fact-check for themselves. This wilful ignorance only allows politicians and other people to get away with all kinds of unsubstantiated claims.

We have the capacity for so many thoughts a day (although not necessarily 35,000), let’s use some of them critically.

P.S. At the time of writing, a new thought meme is circulating. Apparently 70,000 is the new 35,000.

Wednesday, 3 July 2013

Welcome, featuring thoughts on categorisation, equality and blank slate activism

Welcome to Science: Good Bad and Bogus, a name which I freely admit I borrowed from a very interesting History and Philosophy of Science course I took in second year university (thanks Dr. Slezak). The central goal of this course was to solve the demarcation problem, which for the uninitiated, is the problem of defining science. How do we carve up all the investigative disciplines into Science and Non-Science? This, like many other fascinating philosophical discussions, is essentially a categorisation problem. As human beings we have an innate and generally very adaptive desire to categorise the world into groups of A's and B's. We run into trouble of course, when the world isn't as neatly demarcated as the categories we are attempting to fit it into. We are especially bad at the boundaries of categories. Consider the very emotional abortion debate, which centres on when something is "alive" or has "personhood". In other words, when does an entity move from the category "Biological Mass" into the category "Human Being"? Nobody in the pro-life camp believes that all biological masses deserve moral consideration and nobody in the pro-choice camp believes that all human beings should be killed at whim. The disagreement comes when we try to agree on the category boundaries. 

This introduction nicely provides the background for the main topic of this post, which would be a coincidence too amazing to be believed if I hadn't planned it that way. In recent adventures on the internet (especially on the forum I am a member of) there has been lots of discussion around equality. Naturally, this thorny topic generates a great deal of discussion that can essentially be reduced to discussions about categories of people. Which categories are valid for making judgments upon? Which categories are even real? Should we be discriminating on the basis of categories at all? Interestingly, in the context of categorisation, discrimination just means learning to differentiate between members of different categories. Generally, we learn to discriminate based on features that are most predictive of differences between the categories. That is, we pick the features that most reliably signal category membership. And we tend to be very good at it. It isn't very often that we latch on to a feature that is totally non-predictive and persevere in using it (we do sometimes pick a feature that is less predictive than, but correlated with, another more important feature).
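As a concrete (and entirely invented) illustration of "picking the features that most reliably signal category membership", here is a small Python sketch that scores each feature by how often its best guess about the category is right; nothing about it is specific to people, it is just the bare logic of discrimination learning.

# Toy data: which feature most reliably signals category membership?
from collections import Counter, defaultdict

examples = [  # (category, features)
    ("A", {"colour": "red",  "size": "big"}),
    ("A", {"colour": "red",  "size": "small"}),
    ("A", {"colour": "blue", "size": "big"}),
    ("B", {"colour": "blue", "size": "small"}),
    ("B", {"colour": "blue", "size": "big"}),
    ("B", {"colour": "red",  "size": "big"}),
]

def predictiveness(feature):
    """How often does guessing the commonest category for each feature value
    get the category right?"""
    by_value = defaultdict(Counter)
    for category, features in examples:
        by_value[features[feature]][category] += 1
    correct = sum(counts.most_common(1)[0][1] for counts in by_value.values())
    return correct / len(examples)

for feature in ("colour", "size"):
    print(f"{feature}: best-guess accuracy {predictiveness(feature):.2f}")
# Here 'colour' turns out to be the more predictive feature, so a learner
# will tend to latch on to it rather than 'size'.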

With that in mind, is biological sex a useful predictive category on which to make judgments? Can gender equality be achieved whilst acknowledging there may be differences between sexes*? 
The debate centres on the following points:
1) Is sex a real (biologically extant) category?
2) Is sex a useful (predictive) category?
3) Is sex distinct from gender? I won't really be addressing this point, in this particular post.

Arguments around gender equality have typically centred on the first two points. The argument is articulated as follows: sex isn't a biological reality at all and doesn't predict any reliable differences (behavioural, psychological or otherwise). There is no sex; we are all the same; Q.E.D. equality. Leaving aside the fact that I believe I can empirically demonstrate that the first two points are incorrect, deciding that we are equal because we are identical is a disturbing and dangerous assertion. It rests the cause of equality on the incorrect notion that we are all born tabula rasa (blank slates): we are all identical to begin with, therefore we are all equal in our potential. The problem with this argument is twofold:
1) It is flatly contradictory to the pro-equality mantra that diversity is a wonderful, enriching thing (which I wholeheartedly believe). We can't be diverse if we are all identical.
2) It places equality on very unsteady ground, because anytime anyone demonstrates that there are in fact differences between people, it is seemingly falsified. 

People aren't entitled to equal treatment because we are all carbon copies of each other. We differ in numerous beautiful ways (height, weight, sex, intelligence, neuroticism, extraversion...) and many of these differences are very appropriately used to determine our suitability for jobs, partners, and hobbies. People are entitled to equal treatment because at the most important level, that of humanity, we are the same. We are all members of this superordinate category, and we have decided together that membership means a right to certain standards of life. We might one day even extend this category to include members of other species, but sharing this high-level category doesn't obliterate the important within-category diversity. The reality is that, as with all of our other categories, the world is far messier and more shaded than the categories contained within our minds. Evidence suggests we can train ourselves into forming fuzzy categories with semi-permeable boundaries, and these sorts of categories hold out the best hope for understanding and thinking about thorny issues.

Since categorisation is an inescapable and often extremely useful facet of human cognition, it seems perverse to force ourselves to ignore categories completely. Instead, we ought to acknowledge that in the absence of all other information, they provide a statistical 'best guess' of what a person might be like. This guess can, and indeed should, be refined as new and more person-specific information comes to light. It is only when categories are overgeneralised and impervious to disconfirmation that they become stereotypes.

For the final say on this topic, I hand over to Steven Pinker, whose ideas have been instrumental in helping me form my own:

"As many people have pointed out, commitment to political equality is not an empirical claim that all people are clones."

                                - Steven Pinker
 

*Whenever I talk about differences, I mean at an aggregate level. There are for example reliable differences in the average height of men and women, with men being statistically significantly taller on average. This doesn't mean an individual woman can't be taller than an individual man. This will always hold true for any differences discussed.