Tuesday, April 30, 2013

To infinity and beyond!

When you have the opportunity, ask a child whether he wants to win one candy or two.*
If the child is rational, it seems quite obvious that he will take two. "More is better", he would think.

This kind of reasoning doesn't hold in one special world: the magical world of infinities.
Once we enter this subject, basic arithmetic becomes useless.

Georg Cantor left humankind many insights into the conundrums of the infinite. He once said, shocked by his own discoveries: "I see it, but I do not believe it". And, unsatisfied with our lack of understanding of the infinite, he arrived at the idea that there is an infinity of infinities. Yes, you read that right: an infinity of infinities.

In order to give you a simple example of how our basic arithmetic doesn't work in this strange world, let's take two sets:

E = { 2, 4, 6, ... }
N = { 1, 2, 3, 4, 5, 6, ... }

Which one has more elements, N or E?

The common reasoning is that E has fewer elements than N because E is a subset of N; a valid thought if they were finite sets. But here, our intuition fails us.

N and E, surprisingly, have the same number of elements! In other words, the number of natural numbers is equal to the number of even numbers!
But, but...how can it be?!

The key to understanding this idea is one-to-one correspondence. Every even number has exactly one half among the natural numbers. If we map each even number to its half, we can check that for each element in the set E there is exactly one element in the set N, and vice versa. Hence, they have the same number of elements or, technically speaking, the same cardinality.
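As an illustration, here is a tiny Python sketch of that correspondence (a toy, of course -- any program can only ever check a finite prefix of the sets):

```python
# Map each natural number n to the even number 2n, and back.
def to_even(n):
    return 2 * n

def to_natural(e):
    return e // 2

# Every natural hits exactly one even number, and every even number
# comes from exactly one natural: a one-to-one correspondence.
naturals = list(range(1, 11))
evens = [to_even(n) for n in naturals]

print(evens)                           # [2, 4, 6, 8, 10, 12, 14, 16, 18, 20]
print([to_natural(e) for e in evens])  # recovers [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
```

The mapping never runs out on either side, which is exactly why both sets have the same cardinality.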

Strikingly simple as that, and at the same time mysterious. 

Despite the endeavours of Cantor et al., we still have a lot more to learn about the infinite.

Till the next post!
Ronald Kaiser

* Please, don't do that if you do not have 2 candies in hand, =P

Monday, April 29, 2013

Programmers are NOT necessarily bad graphic designers

Today I want to share with you something that upsets me.
In the tech world, it is a popular opinion that a programmer can't do graphic design [1] [2].

It is true that there are good programmers who are bad graphic designers. However, jumping from that to a general conclusion is logical nonsense, a non sequitur.

It is commonly said that analytical/logical thinking takes place on the left side of the brain, while creative activities -- such as drawing -- take place on the right side. This is the main argument used to support the claim that programmers can't be good at planning a UI (User Interface), for example.

Given that, saying a programmer is not good at graphic design is similar to saying that a programmer is not capable of using all of his brain, which I see as an insult. It seems it is cooler to exercise just one side of the brain.

What is more shocking for me is that some programmers accept that as a fact.
And most of them don't even try.

Realistically speaking, of course, in the typical daily routine of a tech/IT department there is always a lot of work to do, and it is more convenient and wise to let the designers do this kind of work.

But, IMHO, it is unwise to say that programmers are not good designers.

Thanks for reading!

Till the next post,
Ronald Kaiser

Sunday, April 28, 2013

Physics in a song?!

Hello readers!

I found this song while searching for a new band to listen to on Grooveshark. Fortunately, I tried "Particles", played by Greenland is Melting, and decided to share it with you.

Hope you like it!

This was a short one, till the next!
Ronald Kaiser

Friday, April 26, 2013

Buffon's needle simulation

Hello readers!
Have you ever heard of Buffon's needle?
This is a very interesting experiment. In a nutshell, it is a fun approximate method to calculate the value of π.
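For reference, the classical result behind the experiment: a needle of length l, dropped on a floor ruled with parallel lines a distance d apart (with l ≤ d), crosses a line with probability P = 2l/(πd). So if we drop n needles and c of them cross a line, n/c approximates πd/(2l); with l = d/2 it approximates π directly. Here is a minimal, self-contained sanity check of that formula (a sketch, independent of the simulation later in this post):

```python
import math
import random

def crosses(d, l):
    # By symmetry, the distance from the needle's center to the nearest
    # line is uniform in [0, d/2], and its angle is uniform in [0, pi/2].
    x = random.uniform(0, d / 2)
    theta = random.uniform(0, math.pi / 2)
    # The needle crosses a line when its half-projection reaches the line.
    return x <= (l / 2) * math.sin(theta)

random.seed(0)  # reproducible drops
d, l, n = 1.0, 0.5, 200000
c = sum(crosses(d, l) for _ in range(n))
print(n / c)  # close to pi, since l = d/2
```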

Watch this video to understand what is going on:

After playing with it for 20 minutes I came up with a simulation in Python, and I thought I could share it with you. Please, be nice. It was a quick hack, ;)

The code:

import sys
import math
import random

def get_random(l, k):
    # Random coordinate in [k, l - k], so needles stay inside the board.
    return random.random()*(l - 2*k) + k

def get_point(w, h, k):
    # Random starting point of a needle on the w x h board.
    return (get_random(w, k), get_random(h, k))

def get_angle():
    # Random needle direction in [0, 2*pi).
    return random.random()*2*math.pi

def intercept(p1, p2, h, k):
    # The needle crosses a line when some horizontal line y = 0, k, 2k, ...
    # lies between the y coordinates of its endpoints.
    for line in range(0, h + 1, k):
        if min(p1[1], p2[1]) <= line <= max(p1[1], p2[1]):
            return True
    return False

def drop_and_check(w, h, k):
    # Drop one needle of length k/2 at a random position and angle.
    p1 = get_point(w, h, k)
    angle = get_angle()
    p2 = (p1[0] + (k/2.0)*math.cos(angle),
          p1[1] + (k/2.0)*math.sin(angle))
    return intercept(p1, p2, h, k)

def repeat(times):
    # With needle length k/2 and line spacing k, the crossing probability
    # is 1/pi, so times/count approximates pi.
    w = h = 1000
    k = 10
    count = 0
    for i in range(times):
        if drop_and_check(w, h, k):
            count += 1
    return float(times)/count

if __name__ == "__main__":
    times = int(sys.argv[1]) if len(sys.argv) > 1 else 100000
    print(repeat(times))

It is also available on GitHub.

Hope you liked it! =)

Till the next post!
Ronald Kaiser

Absolute value

Hello readers!

This is my second comic experiment. Now, in an xkcd style. Hope your elementary math is OK, ;)

Leave a comment if it made you laugh, ;)

Till the next post!
Ronald Kaiser

Friday, April 19, 2013

How and what do we observe?

Hello, readers!
Today I'd like to invite you to make an experiment.
Look at the image below.

Now, say out loud what you've just seen and observed. [1]
Please, don't skip this step, and don't start thinking about how silly it is. I think this exercise is extremely important for grasping the whole point of this post.

What did you observe?
There are many ways you could observe it. To list a few:
  1. A table;
  2. A photo of a table;
  3. A 3d model of a real table;
  4. A particular composition of pieces of wood that looks like a table. Since a table is just an idea (a concept), I'm not looking at an idea;
  5. your answer...
If you are a computer scientist (like me), you could answer something like "I can observe a sequence of pixels, since that is a definition of an image and I'm looking at an image". Evidently, there is no correct answer to the question.

There is no right answer because the question was too broad. No scope was defined. No contexts. No directions. No theories. [2]

The importance of theories

Karl R. Popper argued in his book 'Conjectures and Refutations' that there was no way Isaac Newton could have devised his laws from pure observation of the world. This is a strong assertion, and I suggest you read his argument if you are interested in more details.

Popper believed that you cannot successfully reach a general theory exclusively through observations. The point is that we need a frame of reference to make an observation. It works like a guide telling us which perspective to use.

Thus, in order to answer the table-observation problem, you picked one theory from your experience, probably unconsciously, and thought something like: "I'm going to look with the eyes of a Platonist (through Platonic ideas), so I'm going to answer like 4: 'A particular composition of woods...'".
And not the other way around, from observation to theory. [3]

Few words on big data

Today, the term "big data" is a well-known buzzword. But what do people mean by "doing" big data?
Doing big data, as far as I have seen and known, means trying to fit the data to one of our human theories. The most used are statistics and machine learning. In this particular case of big data, we clearly make a conscious decision about which theory to use to support our observations.
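To make this concrete, here is a toy sketch (the data and the two "theories" are made up for illustration): the same observations look different depending on which theory we decide to fit them to -- a linear law or a constant one.

```python
# Hypothetical observations.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]

n = len(xs)
mean_x = sum(xs) / n
mean_y = sum(ys) / n

# Theory 1: a linear law y = a*x + b, fitted by least squares.
a = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys)) \
    / sum((x - mean_x) ** 2 for x in xs)
b = mean_y - a * mean_x

# Theory 2: a constant law y = c (the data is just noise around a mean).
c = mean_y

print("linear theory:   y = %.2f*x + %.2f" % (a, b))
print("constant theory: y = %.2f" % c)
```

Neither fit is "what the data says" by itself; each is the data seen through a theory we chose beforehand.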

And maybe we should use other theories to look at it...

Till the next post!
Ronald Kaiser

[1] Feel free to comment in this post your answer, ;)
[2] Nevertheless, someone would frown if you say "I can see my grandma through this table!".
[3] Please, don't confuse the words look and observe.

Tuesday, April 16, 2013

On making mistakes

Hello readers!
Today I'm gonna talk briefly about making mistakes.

We, human beings, usually regret making mistakes. Avoiding them, undoubtedly, has been an advantage from an evolutionary point of view. Imagine a prehistoric man hunting in a savanna. When he stumbled on a rock while trying to catch a lion, he was punished for his mistake with his life.

The work of Karl R. Popper, which I'm reading a lot nowadays, delineates another standpoint about errors, or perhaps, I should say, a discussion in another level.

Popper believed that the only way to get closer to the truth (if it exists...) is to trust in reason and be critical. Thus, according to him, making mistakes is not just inevitable, but necessary to improve our understanding of the world. In a few words, this is what his critical rationalism is all about.

Similar ideas are pointed out by other 'successful' people:

"You say you want innovation...If you're serious about this, you need to celebrate and promote failures" - starts at 10'47''.

"Let's forget about avoiding mistakes...We want to limit our ability to make mistakes. Making mistakes is like a crime. No! It's a normal part of the thinking process." - starts at 33'30''.

When Irene Adler is in a scene, you can observe that Sherlock Holmes makes more mistakes. And as far as I know, this is an invariant in all Sherlock Holmes movies/series. Watch this video:

In this particular scene, Holmes is easily poisoned. Immediately after being poisoned, he still strives to understand how he was deceived, mentally reconstructing Adler's poisoning strategy. In a nutshell, learning from his mistake.

So, next time you make a mistake, laugh. It probably means that you are trying something new. And most importantly, learn from it!

Till the next time, readers!
Ronald Kaiser