Archive for August, 2008

Basic Monte Carlo

August 22, 2008

I’ve started looking at Monte Carlo methods recently, just for fun. Monte Carlo methods attempt to give a probabilistic solution to complex problems. The premise is that by using random numbers, you can get an idea of what the solution looks like.
The most basic example that I’ve tried is calculating the value of PI. The story goes like this…

You take a circle circumscribed by a square. Something like a dartboard. Imagine now that you throw darts at it randomly. Some of the darts will land within the circle; others will land in the square but outside the circle. The probabilities of these 2 events are proportional to the respective areas.

The theory in short: for a circle of radius r inside a square of side 2r, the ratio of the areas is πr² / (2r)² = π/4, so PI ≈ 4 × (hits within the circle) / (total hits).

A quick and dirty implementation that I came up with in 10 minutes in Python:

import random
from math import sqrt

radius = 100
hits_total = 0
hits_within = 0
decimals = 4
good_enough_pi = 3.1415926535897932384
approximate_pi = 0

def calculate():
    # throw one dart into the quarter square [0, radius] x [0, radius]
    global hits_total, hits_within
    x = random.randint(0, radius)
    y = random.randint(0, radius)
    hits_total += 1
    if sqrt(x*x + y*y) <= radius:
        hits_within += 1

while int((approximate_pi - good_enough_pi)
          * (10 ** decimals)) != 0:
    calculate()
    approximate_pi = 4 * (float(hits_within) / float(hits_total))

print "Calculating PI with %d decimals took %d tries" % (decimals, hits_total)
print "PI~=%s" % str(approximate_pi)

When running this algorithm, I’ve seen that perhaps 3 out of 4 times it completes really quickly to 3 or 4 decimals. The other time it takes ages and doesn’t converge. So in that sense it’s not reliable. It does converge eventually, but it can take millions of tries.
Not to mention trying to get more decimals.

Here I’m just comparing the approximate PI with a pre-calculated PI, but imagine applying Monte Carlo to problems where you can’t pre-calculate the answer (real problems, in other words). The problem is that it is difficult to know when your result is good enough and you can stop.
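One standard way out (my sketch, not part of the original experiment) is to stop when the estimated standard error of the estimate drops below a tolerance, instead of comparing against a known value:

```python
import math
import random

def estimate_pi(tolerance=0.01, min_samples=1000):
    # Each dart is a Bernoulli trial: inside the quarter circle or not.
    # Stop when the standard error of the PI estimate drops below `tolerance`.
    inside = 0
    total = 0
    while True:
        x, y = random.random(), random.random()
        total += 1
        if x * x + y * y <= 1.0:
            inside += 1
        if total >= min_samples:
            p = inside / float(total)
            # the estimate is 4*p, so its standard error is 4*sqrt(p*(1-p)/n)
            if 4 * math.sqrt(p * (1 - p) / total) < tolerance:
                return 4 * p
```

This only tells you the statistical uncertainty of the estimate, not a guaranteed number of correct decimals, but it gives a principled stopping rule when no reference value exists.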

Maybe you can use another Monte Carlo method to determine the probability that a Monte Carlo method is reliable and converges quickly. Applying Monte Carlo methods recursively…

As I said, basic stuff…


Python and accidental function overloading

August 21, 2008

Today I discovered one dark side of Python. If you accidentally define two functions with the same name, the last one wins, and the first one is as if it never existed. There is no overloading based on parameters (I was about to say types :-). Python will just call the second function (second as it appears in the source code). If the number of arguments doesn’t match, you will get a runtime error, but if it does match, the wrong function will be executed silently.
My accidental overloading resulted from the following sequence of events:

1. I had a function DoOperation
2. At a later date I realized I needed to break it into two, where DoOperation still does the main processing, and DoOperationAck sends the result or acknowledgement to whoever is interested in it.
The only problem is I had forgotten to rename, so I ended up with 2 DoOperation(s).
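A minimal reproduction of the trap (names hypothetical):

```python
def do_operation(data):
    return "processed " + data

def do_operation(data):  # same name: silently replaces the first definition
    return "acknowledged " + data

# Only the second definition survives; Python raises no warning.
result = do_operation("order")   # "acknowledged order"
```

Static checkers such as pylint do flag this (the function-redefined warning), which is one more reason to run one on your code.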

While this was a programming defect I introduced, Python surely didn’t make it easy for me to catch it.


Activity based planning

August 14, 2008

I’m reading Microsoft Secrets, an old book about Microsoft before 1995. It’s still a very interesting book, as it looks back at some of the Microsoft practices for product development. One problem Microsoft had back then was how to select the features that should go into a product. Developers wanted features that they liked, and generally whoever shouted the loudest got their features in. It was pretty much developer driven, and a lot of it was copied from other products (yep, Microsoft wasn’t much of an innovator back then).

The system wasn’t very good, so they finally discovered a better one, called activity based planning. This method started with the user in mind and asked questions like: how is the user using the software? what problem is the user trying to solve? For instance … creating a budgeting system in Excel. This would be a big activity, which can then be broken down into smaller activities.

Once an activity is identified, it is mapped to technical features. These technical features can support multiple activities. For instance, loading images from the disk, or sharing components via OLE. These are technical features that ultimately can support multiple activities.
Activity to feature mapping is therefore a many-to-many mapping.
In the end, the features that supported more activities made it into the product, whereas the ones that couldn’t be mapped to an activity were generally scrapped. It was a good principle to use when rejecting features, as there were usually more candidate features than they could implement.

This change in perspective helped in many ways. First, the product became more aligned to what the customers really wanted, which is always the killer feature of a product. Second, product management became a more important contributor to product development by giving the product a sense of direction: this activity is really important, focus on that; this activity is not that important. Product management could now simply interview people and get feedback specifically targeted at the activities that users were conducting with the software.
But there were improvements that came “for free”. One was that people wrote more generic code. The rationale was: I can change this feature I’m working on to support an additional activity with only minimal changes to the code. Supporting more activities was, in fact, a ticket into the product.

These are lessons that every company should learn from history and from Microsoft.

any variant?

August 11, 2008

Arrays in C and std::vector in C++ provide contiguous memory for elements of the same type (and implicitly the same size). These two properties are extremely important, and because of them one can index into an array/vector like this:

arr[4] = 10;

That means:
a. look up the element at index 4 – that’s at address arr + 4 * sizeof(element)
b. set the found location to the new value 10

As such, the size of the element needs to be the same for all elements for this to work.
Needless to say, this indexing is an important optimization, as it’s a constant-time lookup operation.
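The same arithmetic can be observed from Python using the standard array module (my illustration, not from the original post), which stores same-size elements contiguously and exposes both the start address and the element size:

```python
from array import array

a = array('i', [0, 10, 20, 30, 40, 50])  # contiguous, fixed-size ints
base_addr, n = a.buffer_info()           # start address and element count

# address of a[4] = base address + 4 * size of one element
addr = base_addr + 4 * a.itemsize
assert a[4] == 40
```

This is exactly why the elements must all be the same size: the address of element i is computable in constant time only if every element occupies the same number of bytes.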

However, the need to work with heterogeneous elements (different type/size) in the same array/vector arises many times. While the array/vector doesn’t support that natively, two solutions exist that allow us to do just that.

The first is using unions. Unions are C/C++ data structures that can hold different data types with the size of the union being the size of the biggest element. Example:

union U
{
    char c;
    int  i;
    void* ptr;
};

The size of the union will be max(sizeof(char), sizeof(int), sizeof(void*)). On 32-bit platforms that will be 4 bytes.
This approach allows you to set any of the components of the union individually but internally you will be referring to the same memory location.
Unions are used to implement the VARIANT type in Windows and other variant types. I remember I implemented a CVariant class about 6-7 years ago when I wrote a GUI library/framework for WinCE.
The catch here is that a union doesn’t tell you which component is ‘valid’, so type information needs to be stored in the Variant class too (which loses some of that static-type compiler ‘intelligence’). You will have to programmatically check the type and do stuff depending on the type.
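The union-plus-type-tag idea can be sketched like this (a hypothetical Variant in Python for brevity; the real VARIANT is a C struct pairing a union with a type field):

```python
class Variant:
    """A tagged value: the tag records which 'member' is currently valid,
    just as a variant type pairs a union with a type field."""

    def __init__(self, tag, value):
        self.tag = tag        # e.g. "char", "int", "ptr"
        self.value = value

    def as_int(self):
        # You must check the tag before trusting the stored value.
        if self.tag != "int":
            raise TypeError("variant currently holds a %s" % self.tag)
        return self.value
```

The programmatic type check the post describes is the `if self.tag != ...` guard: the compiler can no longer verify which member is valid, so the runtime check has to do it.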

The other approach is to use pointers. Pointers to objects have the same size irrespective of the type of the object. However, these pointers still need to have the same type in order to use them in containers. A standard C approach would be to use void*, but that’s just too error prone and very much not-C++.
In C++, an approach would be to have a base class and then a templated class that derives from it. The base class cannot be templated, as that would generate different types which would make it impossible to use in an array/vector. For example:

struct base
{
    virtual ~base() {}
    virtual base* clone() const = 0;
};

template <class T>
struct deriv : public base
{
    deriv(const T& t) : t_(t) {}
    virtual deriv<T>* clone() const { return new deriv<T>(t_); }
    T t_;
};

When you construct such an object, you use the deriv class with the actual type; however, when you place it in the container, you place it as a base pointer. As such, your container will look like this:

std::vector<base*> container;

Alternatively, you can just use boost::any, which implements this scheme to give you any type. Unlike the union solution, this second solution allocates instances on the heap/free store; the union can store instances on the stack or on the heap. The limitation that comes with the union is that it can only hold the types it declares (in our example: char, int and void* only). boost::any, on the other hand, can hold any type.


August 7, 2008

One thing that caught me out recently (and not just me, from what Google tells me) is Python’s treatment of empty data structures as false, a sort of NULL pointer in C. Coming from a C/C++ background, I love writing code like this:

def func(param):
    if param:
        print "param is not empty"
    else:
        print "param is empty"

param can be an empty dictionary {}, an empty list [], an empty tuple (), or an empty string "". I love this syntax.
Python treats these empty data structures as False.
However, Python also treats None like False. None is possibly closer to what NULL is in C.
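A quick check of these rules (0 belongs on the list too):

```python
# Empty containers, the empty string, None and 0 all evaluate as false
falsy_values = [{}, [], (), "", None, 0]
for value in falsy_values:
    assert not value

# Yet None is a distinct value, not merely another empty container
assert {} != None
assert None is not False
```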

The only problem is when you write code where you need to distinguish between 3 states:
a. non-empty data structure
b. empty data structure
c. not found (or any other error)

Solving this sort of puzzle doesn’t map very well to a boolean True/False representation. Below is some code that I wrote recently that got me into trouble. To preserve anonymity, I’ve changed the names of the main actors 🙂


mobile_number = person.get("mobile_number", None)
if mobile_number:
    print "Mobile number is:", mobile_number
else:
    print "No mobile number?!"


Assuming person is a dictionary, this code tries to get the value for the “mobile_number” key or, if the key doesn’t exist, returns None. Pretty decent, until you realize that you cannot distinguish between the case where the mobile_number key is missing and the case where mobile_number is the empty string.

Maybe in this case it’s not important to distinguish between the None value or the empty string. But you can surely imagine a case where an empty string would be a valid value, but not a missing key.

The solution is to explicitly check for None as in the example below:


mobile_number = person.get("mobile_number", None)
if mobile_number is None:
    print "Mobile number is missing"
elif mobile_number:
    print "Mobile number is:", mobile_number
else:
    print "Mobile number is empty"
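An alternative (my variation, not from the original post) is to test key presence directly, which keeps the three states explicit even when None is itself a legitimate value:

```python
def describe_mobile(person):
    # Distinguish the three states via key presence instead of a sentinel
    if "mobile_number" not in person:
        return "missing"
    elif person["mobile_number"]:
        return "present"
    else:
        return "empty"
```

This avoids the sentinel entirely: the `in` operator answers "was the key there?" and truthiness answers "is the value empty?" as two separate questions.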



August 4, 2008

I’ve discovered LibraryThing today. It’s a website where you display the books you’re reading, and you can see who else is reading the same thing and what else they’re reading. It’s a good way to discover new books to read, similar in a way to Amazon’s recommendations. I’ve only used it for 30 mins, so I haven’t yet got the ‘feel’ of it.


August 1, 2008

A while ago I finished reading The Bond King: Investment Secrets from PIMCO’s Bill Gross. I didn’t find the book too exciting. It’s written in a bombastic style, I thought, which I should have expected from the title. It’s still interesting to find out about Bill Gross and some of his investment strategies. But at points, I thought the book was hard-selling some of the funds that PIMCO manages. Perhaps more useful is PIMCO’s website, where they have some fixed income tutorials. I read one or two and they are very good.

After that, I offered myself a break, switched my brain off and read one of Michael Connelly’s novels, The Black Ice. It’s a story about a police detective tracking down several murders related to a new drug called Black Ice. It took me 2 days to read. Brain gargle.

And now, I moved on to an older book, Microsoft Secrets: How the World’s Most Powerful Company Creates Technology, Shapes Markets and Manages People by Michael A. Cusumano. It’s an old book that documents Microsoft’s practices, development methods and culture from the ’80s until 1995, when the book was published. So it doesn’t include recent developments.
I read it because:
1. It’s still a good lesson in company organisation – the good, the bad and the ugly, learning from mistakes and avoid repeating history
2. I’m particularly interested in their product lifecycle – identifying good ideas, implementing them, selling them – I’m interested in that entrepreneurial spirit
3. It’s interesting reading about how they discovered some of the established best practices in software engineering. Today we perhaps take them for granted, but it’s good to learn/remember why they are best practices in the first place.
4. It’s a good history lesson

Yesterday, I ordered The Accidental Investment Banker on Amazon, so I guess that will show up on my list soon. Although Microsoft Secrets is a thick book that will keep me busy for a while.


August 1, 2008

The links below open in a new browser.

Firefox market share exceeds 20%, Internet Explorer dips below 70%

Building the Web 2.0 Enterprise: McKinsey Global Survey Results – how enterprises use Web 2.0

The A-Z of Programming Languages: JavaScript – Interview with Brendan Eich
Brendan Eich created JavaScript in 1995 with the aim of providing a “glue language” for Web designers and part-time programmers. It has grown to become one of the most widely used languages on the planet.

Some amazing games in Javascript

Processing in Javascript – amazing stuff (most of it only works in Firefox3)