Easter egg on YouTube: the classic snake game

Here is a classic: a funny Easter egg on YouTube. When you visit a random video, pause the video at 0:00, press the right arrow key on your keyboard and keep it pressed. While keeping the right key pressed, press the down key as well and a "snake game" will appear.

Like this:

(argh, I say left when I meant down in this video)

Yes, we can waste even more time on the Interwebs...

The myth of the Drupal Learning Curve

The Drupal Learning Curve. A much debated issue. Up to 2005 Drupal was mainly built by people who didn't per se need the tool, who were not scratching their own itch but were exploring the waters of what we now call the semantic web. They created a tool we love, with a node system that is in many ways still ahead of other systems and with hooks that catch anything. But also a system that, up to at least version seven, didn't give "the user" the best experience when it comes to the interface.


And here is where the normal rant ends. Because we need to stop saying "Drupal has a high learning curve for the user". Simply because it is false, and by saying so we lie and scare off potential "users". Adding content in Drupal 7 is not harder than in, for example, WordPress or Joomla: some fields, a preview, a submit and you are done. Sure, there is the everlasting WYSIWYG dilemma, and while Drupal does not deal with this in core, it is as easily 'solved' as it is in a CMS where the editor is integrated.

There are at least two problems with "Drupal has a steep learning curve for the user".

  1. A fool with a tool is still a fool

    First and foremost, we forget to define "user". Drupal has always had the vision to cut out the middleman, make the webmaster obsolete, drop the database administrator and give the power of these roles to "the user", the content editor.

    The user was traditionally the person responsible for adding content. Now, this person is not just able to create content by adding some data to some fields and pressing submit. This person is also able to make lists of the most-read items, create new content types and rearrange blocks on the site.

    When I said "First and foremost, we forget to define 'user'", I meant "First and foremost, we redefined 'user'". And by redefining "user", by enabling a normal person to do things he (m/f) had to call IT for before, by giving him the tools to excel beyond his normal duties, we created the UX problem. Where the normal user interface of a database engineer was the command line, we handed the power of a web interface to the one who wants the change. From the database engineer's point of view a great step forward in UX; from the content editor's point of view a complex UI for something he might need only once in a while. Roles and rights to the rescue, one might think. But you cannot undo the complex, powerful interface Joe Blogger sees when he installs Drupal under UID 1.

    Drupal doesn't have a bad UI and doesn't have a steep learning curve. Drupal eliminates the middleman, and it does so by eliminating the processes and procedures around change management of the technical backend and trading them for an unpolished frontend.

  2. Lego duplo upgrade

  3. Up or out?

    Second, we traditionally see the learning curve in relation to horizontal growth. We think that one becomes a Chx, Unconed or Dries by submitting a blog post, then installing Drupal, tweaking the user interface, making a template, coding a module and then... becoming a core maintainer. This classic "growth is seen from left to right" diagram tells it all:

    (copyright Dries)

    The truth is we have hundreds of thousands of people on the left and only a handful of people on the right. And most people who are on the right side of the diagram didn't grow horizontally, they grew vertically. They came in with a background in programming, had programmed on other systems and chose Drupal because of its code, community or license.

    And most people on the left are fine being on the left and don't want to "grow" horizontally; they want to grow vertically. They are good at something and use Drupal for it. And they might be good at what they do in the Drupal community and grow there vertically, not horizontally. They want to become the best testers, community members, or write the best documentation. The learning curve there has nothing to do with the tool Drupal, but with the community, the person and the tasks at hand.

    In 2005 I read a very well-written book on how great leaders stimulate vertical growth as well: let a waiter become the best waiter on the block, not the manager of the café. Dries, you still have my copy of "First, Break All the Rules" :-)

    Drupal doesn't have a bad UI and doesn't have a steep learning curve. Drupal should embrace vertical as well as horizontal growth by letting go of the traditional vision of how people or communities can grow.


So let's do ourselves a favour and stop calling Drupal hard to learn. It is as true as saying "a bike is hard to learn". Riding a bike is not hard; building or designing a bike is. Oh, and while we are at it: stop calling Drupal a WebCMS.

Fingerprinting a Drupal site, what version is that site running?

Fingerprinting the version of a Drupal site
Say you want to find out if a site is using Drupal. You could dive into the headers, as described by Lullabot some time ago, and see if Dries's birthday shows up there:
Sun, 19 Nov 1978 05:00:00 GMT
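That header trick can be checked with a few lines of shell. A minimal sketch: the helper name `is_drupal_expires` and the sample header string are invented for illustration; against a live site you would feed it the output of `curl -sI`.

```shell
# Sketch: detect Drupal's tell-tale Expires header (Dries's birthday),
# which Drupal's page cache sends on cached anonymous pages.
# "is_drupal_expires" and the sample header below are made-up examples.
is_drupal_expires() {
  echo "$1" | grep -qi "19 Nov 1978"
}

headers="Expires: Sun, 19 Nov 1978 05:00:00 GMT"
if is_drupal_expires "$headers"; then
  echo "Looks like Drupal"
fi
```

Against a real site this becomes something like `curl -sI "$url" | grep -i "19 Nov 1978"`.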

A much easier and more generic way is to install the "BuiltWith Technology Profiler" extension in Chrome(ium). This add-on not only finds Drupal sites, but also other CMSes like WP, Joomla and dozens of others, and it scans for things like Google Analytics code on the page. A must-have for the curious browser. If you find a nice site, you might tag it in Delicious with "yads" (yet another Drupal site) and/or "drupalsite"; take a look at some of my findings at

But what if you want to know what version a specific Drupal site is running? Well, you could look for the CHANGELOG.txt file in the root, but that file is often deleted, for good or for bad reasons. Personally I think it is good practice to give as little information as possible to the outside world, for example by not echoing the version of the webserver you are running. In Apache this can be done with two lines:
ServerTokens ProductOnly
ServerSignature Off

and this was done here as well.

There has been some debate about whether Drupal should hide its text files as well, like CHANGELOG.txt. Some other CMSes do this, or use a die() to protect them from prying eyes. In the end the consensus was that removing these text files will not make your site any safer; good procedures and adequate updating of core and contributed modules will!

So fingerprinting most Drupal sites is easy: one just looks at the CHANGELOG file and knows what version the site is running. However, if you don't trust the changelog file, or it has been removed, it is still rather easy to fingerprint a Drupal site.

It can for example be done in the following way:

  1. Download a couple of Drupal core releases. Unzip / tar -x them.
  2. Go through all directories to see which files changed. This can be done with something like:
    diff -r -q drupal-7.7 drupal-7.8 | grep -iv info >> drupaldiffall
  3. Fingerprinting works best on JS or CSS files, so grep them from drupaldiffall and put them in drupaldiffjscss
  4. Now find the files that have changed most often:
    cat drupaldiffjscss | grep -i "files" | cut -d " " -f 2 | cut -d "/" -f 2,3,4,5,6,7,8,9,10 | sort | uniq -c | sort | tail -10
     12 misc/autocomplete.js
     12 misc/collapse.js
     12 misc/drupal.js
     12 misc/farbtastic/farbtastic.js
     12 misc/jquery.js
     12 misc/progress.js
     12 misc/tabledrag.js
     12 misc/tableselect.js
     12 misc/textarea.js
     12 modules/color/color.js

    So out of these, let's pick the color.js file, which changed 12 times. Note that with Drupal 7 the CSS and JS files mostly don't change at all, whereas in the late 6 versions these files changed more and more often. Hence the tail -10 outcome will differ depending on which Drupal cores you downloaded (and yes, I suck at regular expressions).
  5. The next step is to make the color.js file uniquely identifiable in all versions. Here is where our old friend MD5 comes in handy. The syntax might differ on BSD-based systems versus GNU/Linux, but it will be something like:
    find ./ -name color.js | xargs md5 > rainbow
    And the rainbow file itself will look like:
    cat rainbow
    MD5 (.//drupal-5.22/modules/color/color.js) = 61098c218594ab871b48cd43459dc2ed
    MD5 (.//drupal-5.23/modules/color/color.js) = 61098c218594ab871b48cd43459dc2ed
  6. Now all we have to do is find the color.js file in a site we want to fingerprint and match it against this rainbow file:
    grep `curl | md5` rainbow
    MD5 (.//drupal-6.22/modules/color/color.js) = f5ea11f857385f2b62fa7bef894c0a55

    So according to this, the site is running the latest stable Drupal 6 version. Doing the same for the Belgian/Dutch site gives you less useful information:
    grep `curl | md5` rainbow | wc -l

    So all we know now (if we didn't wc the outcome) is that it is one of the latest Drupal 7 versions. So you have to start digging deeper:
    more drupaldiffjscss | grep "drupal-7" | grep "Files " | cut -d " " -f 2,3,4,5,6,7,8,9,10 | sort | uniq -c (or visit :-)
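The steps above can be condensed into a small script. The sketch below simulates the whole pipeline against a throwaway directory tree, since the real thing needs unpacked Drupal cores and a live site to curl from; all paths, version numbers and file contents here are made up, and `md5sum` is used as the GNU spelling of BSD's `md5`.

```shell
# Simulated rainbow-file fingerprint: throwaway dirs stand in for
# unpacked Drupal cores (everything below is invented for illustration).
workdir=$(mktemp -d)
mkdir -p "$workdir/drupal-6.22/modules/color" "$workdir/drupal-7.8/modules/color"
echo "// color.js as shipped in 6.22" > "$workdir/drupal-6.22/modules/color/color.js"
echo "// color.js as shipped in 7.8"  > "$workdir/drupal-7.8/modules/color/color.js"

# Step 5: one md5 per core version goes into the rainbow file.
find "$workdir" -name color.js -exec md5sum {} \; > "$workdir/rainbow"

# Step 6: hash the file fetched from the target site (here the 6.22 copy
# stands in for "curl http://site/modules/color/color.js") and look it up.
hash=$(md5sum "$workdir/drupal-6.22/modules/color/color.js" | cut -d" " -f1)
grep "$hash" "$workdir/rainbow"   # the matching line names the version
```

The final grep prints the rainbow line for the drupal-6.22 copy, which is exactly how the lookup reveals the version of the remote site.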

So why would one need this information, you might ask, since it is clear that in the wrong hands it will lead to... well, the bad guy knowing what version you are running. And to be honest, if the bad guy goes through this much trouble to find out what version you are running, (s)he was going to find out anyway.

But like all tools, it can be used for good. My employer takes over a lot of sites built by others (it comes with the Drupal growing pains, the freedom of the GPL and the fact that the market is getting closer to an adolescent stage). Most of the time we have to give a rough estimate for maintaining and expanding the site, yet the prospect doesn't know what version he is running and doesn't want to ask his current supplier. By doing a quick scan of, among other things, the version the site is running, we know how well it was maintained and what budget would be needed to upgrade to the latest version. You might have a different use case. For good.

Solved: Faces in iPhoto missing?

I *LOVE* iPhoto. I have stored over 10 years of digital snapshots in tens of thousands of photos. And I have added a lot of metadata, for example the faces of people.

Recently, after a big import, a crash of iPhoto and more, everything was still working, yet the faces were not being displayed: an empty cork board. A rather standard problem (Google search). Yet despite it being a standard problem, the solution is not standard.

Many claim that right-clicking your iPhoto library, opening it with "Show Package Contents" (it is a directory after all!) and then deleting the "face_blob.db" and "face.db" files will do the trick. It didn't for me; it just deleted all faces permanently. Actually, I didn't remove the files but just renamed them, so restoring this part was easy.

I do have backups: a Time Machine server with disks that mirror 10 years of digital history. Yet when one activates Time Machine, one only gets the option to completely restore iPhoto. Not something I wanted to do; I had just imported all my vacation photos and deleted them from my iPhone and point-and-shoot.

Now I could copy all these files (they are in the iPhoto dir as well), restore from the backup and reimport, but that would take ages and there is a much smarter way. Just not via the Time Machine interface.

How to solve missing faces in iPhoto

This is what I did:

  1. Open the contents of the iPhoto dir and copy the two faces files to another place
  2. Start Time Machine and close it again; make sure you now have a Time Machine disk mounted on your desktop
  3. Start a terminal (Spotlight: terminal :-)
  4. "cd" to the Time Machine disk, e.g. cd /Volumes/TimeMachine/servername/data/location/of/iPhoto
  5. cp the face* files to a local folder (e.g. cp face* /tmp/) or directly into the iPhoto directory
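The five steps boil down to a plain `cp` between two directories. The sketch below simulates that with throwaway directories standing in for the Time Machine volume and the iPhoto library; every path and file content here is invented for illustration.

```shell
# Simulation of the copy-back: mktemp dirs replace the real
# Time Machine volume and iPhoto package (all paths are made up).
tm=$(mktemp -d)    # stands in for /Volumes/TimeMachine/.../iPhoto Library
lib=$(mktemp -d)   # stands in for ~/Pictures/iPhoto Library

# The backup still holds the good face databases...
echo "good faces db" > "$tm/face.db"
echo "good blob db"  > "$tm/face_blob.db"

# ...while the live library's copy is broken.
echo "corrupt" > "$lib/face.db"

# Step 5 above: copy the face* files straight from the mounted
# Time Machine directory into the iPhoto package.
cp "$tm"/face* "$lib"/
cat "$lib/face.db"   # prints "good faces db"
```

The point being made in the post: both "disks" are ordinary directory trees, so an ordinary `cp` restores exactly the two files you need and nothing else.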

Just remember, the Time Machine disk and the iPhoto icon are just directories... This solved my problem: thousands of happy faces again :-)

Cookies, privacy, politics and driving forward by looking in the rear-view mirror

((c) Arty Smokes)
It is always amusing to see that politics is not about looking ahead at all, but about looking back. A kind of driving a car by looking in the rear-view mirror. It works fine as long as conditions don't change, until a curve comes. In today's world, however, everything changes continuously, all the time. Curves, slopes, forks: they are the order of the day. So when politics starts working the steering wheel and the engine of the car while looking in the rear-view mirror, you know we are going to hit the guardrail.

((c) bass_nroll)
When, five years ago, the whole of the Netherlands started to get annoyed by the absurd cost of mobile data abroad, politics busied itself with the high cost of SMS abroad. I guess politics also busied itself with the high price of oats for the horses pulling international coaches, when trains had already been shuttling between countries for decades. Indeed, governing is looking ahead with a rear-view mirror.

It is equally amusing to see the Netherlands and the EU suddenly getting worked up about "cookies": small text files (in the camping browser) or lines in a text file (in real browsers). You cannot catch a venereal disease from a text file. And around 1996 I already knew how to deal with them: block what you don't want and use what is handy. Without cookies it is hard to log in to sites and thus to use services, because HTTP is simply stateless. A user doesn't want to go without them. Yet the user usually doesn't know the downsides: a cookie is used to identify a user and can thus be used to... identify someone. By definition, a cookie can only be read on the domain that set it, so one domain cannot read my cookie from another. No problem there. The problem is that there are domains that appear on virtually every page nowadays, for example Google's ads combined with Google Analytics. Google can thus quite easily follow a user across 70% of all internet sites. A problem? Perhaps. Really? No.
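That same-domain rule can be sketched in a few lines of shell. The `sends_cookie_to` helper and the domain names below are made up; the sketch only illustrates that a browser attaches a cookie solely to requests for the domain that set it.

```shell
# Toy model of cookie scoping: the cookie was set by one domain and is
# only sent back there (domain names and cookie value are invented).
cookie_domain="ads.example"
sends_cookie_to() {
  # a browser attaches the cookie only when the request goes to the
  # cookie's own domain
  if [ "$1" = "$cookie_domain" ]; then echo "Cookie: id=42"; else echo ""; fi
}
sends_cookie_to "ads.example"     # prints "Cookie: id=42"
sends_cookie_to "myblog.example"  # prints nothing
```

The catch described above is that a third-party domain embedded on many sites is, from the browser's point of view, still "its own" domain, so its cookie travels along on every one of those pages.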

Sure, Google (but also others, like Facebook asking me whether I want to be the first of my friends to "like" something on a site) knows a lot about me. And even though Google adjusted its policy in 2007, it knows a lot about me, as do dozens of other large companies and hundreds of ad agencies. Bad? Well, I prefer good ads over bad ones. Behavioural or not.

That said, I do think it is absurd for enterprises, and especially governments, to hang a camera above their website and let Google measure who comes in and what coat he is wearing. Certainly when there are very good alternatives: doing the raw data analysis yourself, in real time and "for free", with an open-source solution. And that solves one of the biggest problems: as websites become web applications, you only see the page load, not the interaction. See my old posting on when webpages become webapplications and the influence on statistics. So analysing your raw data yourself is the best solution.

Of course all the publishers act as if the sky, full of cookies, is about to fall on our heads. But I am no Gaul; ich bin ein Groninger. So I fear neither the cookies nor the sky. Bring it on.

((c) nettsu)

So this cookie fuss is a rearguard battle. Because there really are plenty of other ways to uniquely track a user. The source IP address, of course, though that is not very unique when many addresses sit behind one "NAT" address. But the browser itself is also very often unique. From the headers it sends you can see which version it is, which plugins are installed and which fonts I have. Together those things are much more unique than people think, and they can be used for tracking too. Test your own browser on the EFF's test page. In my case my browser was unique among the 1.6 million browsers tested so far.
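A header-based fingerprint is essentially a hash over everything the browser volunteers. A toy sketch, with invented example values for the user agent, plugin list and font list:

```shell
# Toy browser fingerprint: concatenate a few traits the browser sends
# and hash them. All values below are made-up examples.
ua="Mozilla/5.0 (X11; Linux x86_64) Firefox/5.0"
plugins="flash 10.3,java 1.6,quicktime"
fonts="DejaVu Sans,Liberation Serif,Ubuntu"

fingerprint=$(printf '%s|%s|%s' "$ua" "$plugins" "$fonts" | md5sum | cut -d" " -f1)
echo "$fingerprint"
```

The EFF's test does something far more refined, but the principle is the same: enough individually boring traits, combined, yield a nearly unique identifier.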

Your browser fingerprint appears to be unique among the 1,636,036 tested so far.
Currently, we estimate that your browser has a fingerprint that conveys at least 20.64 bits of identifying information.
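The "20.64 bits" in that quote is not magic: a fingerprint that is unique among N tested browsers carries about log2(N) bits of identifying information, which you can verify with a one-liner.

```shell
# log2(1636036): identifying bits of a fingerprint unique among the
# 1,636,036 browsers tested (awk has no log2, so use log/log).
awk 'BEGIN { printf "%.2f bits\n", log(1636036) / log(2) }'
# prints "20.64 bits"
```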

Read the EFF's information on Every Browser Unique and the PDF

The combination of IP address and browser really does allow Google, or any other company, to uniquely track my surfing behaviour across the net without cookies. Of course one would rather track a user than a device, but if I occasionally use Google services that require logging in (Google Apps) from my IP address and browser, I am one hundred percent identifiable.

To make clear just how much this is driving forward while looking in the rear-view mirror: the whole cookie discussion is over once we have IPv6. IPv6 has no NAT; IPv6 makes your device unique anywhere in the world. By definition: your fridge, your TV, the kids' PC. They all get a unique address, and that is not some future vision or vague consultant talk. In my household some 20 IP addresses are in use at any moment of the day: iPhones, iPads, MacBooks, a Mac mini, an iMac, cameras, a Wii and even my TV have an IP address. For now NATted, but soon genuinely traceable one-to-one; read IPv6 and the future of privacy

What does it mean to shift from the present addressing system (IPv4) to the ‘new’ system (IPv6)? To begin, it means that there is a lot more of IP real-estate; whereas IPv4 offers roughly 4.3 billion addresses, IPv6 provides 340 trillion trillion trillion (!) unique addresses. One can quickly appreciate the numerical difference. More significantly, it means that the system of LANs that we have today will no longer be required because of IP address scarcity. Each of the Internet-enabled devices in my home could have its own IPv6 address – there is no real need to route all the data through a single IP address that is provided by my ISP.
In a situation where all Internet enabled devices have a constant address, the regular refrain “we don’t know who’s IP address we’re monitoring; it is possible that a set of users are sharing the same address!” is quickly disabused. With a persistent IP address, depending on the degree of algorithmic surveillance, it is possible to develop very, very good understandings of who is presumably the agent ‘using’ the IP address. Similar to how marketers can figure out who you are with very little information, advertising companies such as Doubleclick are in a comparable situation to develop very detailed, very personal, accounts of the individuals that regularly use Internet enabled devices. In a situation where all devices have unique IP addresses, this could facilitate more accurate advertising (read: better targeted and more invasive), and that government agencies and ISPs alike could more accurately identify and track particular users online.
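The size difference the quoted article mentions is easy to verify yourself: IPv4 addresses are 32 bits, IPv6 addresses 128 bits.

```shell
# 2^32 vs 2^128: the "4.3 billion" and "340 trillion trillion trillion"
# (about 3.4e38) from the quoted article.
awk 'BEGIN {
  printf "IPv4: %.1f billion addresses\n", 2^32 / 1e9
  printf "IPv6: %.1e addresses\n", 2^128
}'
```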

Wonderful, isn't it: politics busies itself with the problems of 15 years ago and implements solutions that will be complete nonsense in 5 years. "Telling the future by looking at the past assumes that conditions remain constant. This is like driving a car by looking in the rear view mirror." (Herb Brody) Thanks, The Hague. Thanks.
