Quick one: using download attribute on links to save Canvas as PNG

Posted By Christian Heilmann

One of the things I always liked about Firefox is that you can right-click any canvas and select “Save image as…”. Other browsers, Chrome included, don’t offer that. Instead you need to get the image data, create a URL from it and open it in a new tab, or something along those lines.

One very simple way to allow people to save a canvas as an image (and name it differently – which would have to be done by hand in the case of right-clicking on Firefox) is to use the download attribute of a link.

You can see this in action in this JSFiddle. Simply paint something and click the “download image” link.

The relevant code is short and sweet (canvas is a reference to the canvas element):

    var link = document.createElement('a');
    link.innerHTML = 'download image';
    link.addEventListener('click', function (ev) {
      link.href = canvas.toDataURL();
      link.download = 'mypainting.png';
    }, false);
    document.body.appendChild(link);
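For larger canvases, a base64 data URL can get unwieldy. A possible alternative (a sketch of my own, not from the post) is `canvas.toBlob()` with an object URL; the `saveCanvas` name is made up, while `toBlob` and `URL.createObjectURL` are standard browser APIs:

```javascript
// Sketch (not from the original post): save a canvas via a Blob and an
// object URL instead of a data URL, avoiding a huge base64 string for
// large canvases. `saveCanvas` is an invented name for illustration.
function saveCanvas(canvas, filename) {
  canvas.toBlob(function (blob) {
    var a = document.createElement('a');
    a.href = URL.createObjectURL(blob);
    a.download = filename;
    a.click(); // triggers the download
    // Release the object URL once the click has been processed.
    setTimeout(function () { URL.revokeObjectURL(a.href); }, 0);
  }, 'image/png');
}
```

You would call something like `saveCanvas(canvas, 'mypainting.png')` from a click handler; browsers without `toBlob` would still need the data URL approach.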

10 Year Blogaversary

Posted By Hacking for Christ

10 years ago today, I started blogging.

At the time, I thought I was rather late to the blogging party. In hindsight, it seems not. Since then, through one change of host (thanks, Mozillazine!), there have been 1,463 posts (including this one), and 11,486 comments. Huge thanks to everyone who’s read and/or commented (well, if you commented without reading, not so much) in the last 10 years. It’s uncertain whether I or the blog will last another 10 (the Lord is in control of that) but here’s to however long it is!

Web Components and you – dangers to avoid

Posted By Christian Heilmann

(Image: “Legos” by C. Slack)

Web Components are a hot topic now. Creating widgets on the web that are part of the browser’s rendering flow is amazing. So is inheriting from and enhancing existing ones. Don’t like how a SELECT looks or works? Get it and override what you don’t like. With the web increasingly consumed on mobile devices, performance is the main goal. Anything we can do to save on battery consumption and to keep interfaces responsive without being sluggish is a good thing to do.

Web Components are a natural evolution of HTML. HTML is too basic to allow us to create app interfaces. When we defined HTML5 we missed the opportunity to create the semantic widgets that already existed in other UI libraries. Instead of looking at the class names people used in HTML, it might have been more prudent to look at what other RIA environments did. We limited the scope of new elements to what people had already hacked together using JS and the DOM. Instead we should have aimed for parity with richer environments or desktop apps. But hey, hindsight is easy.

What I am more worried about right now is that there is a high chance that we could mess up Web Components. It is important for every web developer to speak up now and talk to the people who build browsers. We need to make this happen in a way our end users benefit from Web Components the best. We need to ensure that we focus our excitement on the long-term goal of Web Components. Not on how to use them right now when the platforms they run on aren’t quite ready yet.

What are the chances to mess up? There are a few. From what I gathered at several events and from various talks I see the following dangers:

  • One browser solutions
  • Dependency on filler libraries
  • Creating inaccessible solutions
  • Hiding complex and inadequate solutions behind an element
  • Repeating the “just another plugin doing $x” mistakes

One browser solutions

This should be pretty obvious: things that only work in one browser are only good for that browser. They can only be used when this browser is the only one available in that environment. There is nothing wrong with pursuing this as a tech company. Apple shows that when you control the software and the environment you can create superb products people love. It is, however, a loss for the web as a whole, as we simply cannot force people to use a certain browser or environment. This is against the whole concept of the web. Luckily enough, different browsers support Web Components (granted, at various levels of support). We should be diligent about asking for this to go on and go further. We need this, and a great concept like Web Components shouldn’t be reliant on one company supporting it. A lot of other web innovation that was heralded as the great solution for everything went away quickly when only one browser supported it. Shared technology is safer technology. Whilst it is true that more people having a stake in something makes it harder to deliver, it also means more eyeballs to predict issues. Overall, sharing efforts prevents an open technology from becoming a vehicle for a certain product.

Dependency on filler libraries

A few years ago we had a great and – at the same time – terrible idea: let’s fix the problems in browsers with JavaScript. Let’s fix the weirdness of the DOM by creating libraries like jQuery, prototype, mootools and others. Let’s fix layout quirks with CSS libraries. Let’s extend the functionality of CSS with preprocessors. Let’s simulate functionality of modern browsers in older browsers with polyfills.

All these aim at a simple goal: gloss over the differences in browsers and allow people to use future technologies right now. This is on the one hand a great concept: it empowers new developers to do things without having to worry about browser issues. It also allows any developer to play with up and coming technology before its release date. This means we can learn from developers what they want and need by monitoring how they implement interfaces.

But we seem to forget that these solutions were built to be stop-gaps, and we have become reliant on them. Developers don’t want to go back to a standard interface of DOM interaction once they’ve got used to $(). What people don’t use, browser makers can cross off their already full schedules. That’s why a lot of standards proposals and even basic HTML5 features are still missing from browsers. Why put effort into something developers don’t use? We fall into the trap of “this works now, we have this”, which fails to help us once performance becomes an issue. Many jQuery solutions on the desktop fail to perform well on mobile devices. Not because of jQuery itself, but because of how we used it.
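To illustrate the $() point: much of what made these libraries indispensable has since become a one-liner over standard DOM APIs. A sketch, with a wrapper helper of my own (not any library’s actual API):

```javascript
// Sketch: a minimal $()-style helper over the standard DOM.
// querySelectorAll is the standard API; the wrapper just turns the
// resulting NodeList into a real array. The helper and its `context`
// parameter are illustrative, not taken from any library.
function $(selector, context) {
  var root = context || document;
  return Array.prototype.slice.call(root.querySelectorAll(selector));
}
```

With that, `$('nav a').forEach(...)` works without loading a library – exactly the kind of redundancy we should allow these libraries to reach.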

Which leads me to Web Components solutions like X-Tags, Polymer and Brick. These are great as they make Web Components available right now and across various browsers. Using them gives us a glimpse of how amazing the future for us is. We need to ensure that we don’t become dependent on them. Instead we need to keep our eye on moving on with implementing the core functionality in browsers. Libraries are tools to get things done now. We should allow them to become redundant.

For now, these frameworks are small, nimble and perform well. That can change as all software tends to grow over time. In an environment strapped for resources like a $25 smartphone or embedded systems in a TV set every byte is a prisoner. Any code that is there to support IE8 is nothing but dead weight.

Creating inaccessible solutions

Let’s face facts: the average web developer is more confused about accessibility than excited by it. There are many reasons for this, none of which are worth bringing up here. The fact remains that an inaccessible interface doesn’t help anyone. We tout Flash as evil because it blocks people out. Yet we build widgets that are not keyboard accessible. We fail to provide proper labeling. We make things too hard to use and expect the steady hand of a brain surgeon as we create tight interaction boundaries. Luckily enough, there is a new excitement about accessibility and Web Components. We have the chance to do something new and do it right this time. This means we should communicate with people of different abilities and experts in the field. Let’s not just convert our jQuery plugins to Web Components verbatim. Let’s start fresh.

Hiding complex and inadequate solutions behind an element

In essence, Web Components allow you to write custom elements that do a lot more than HTML allows you to do now. This is great, as it makes HTML extensible (and not in the weird XHTML2 way). It can also be dangerous, as it is simple to hide a lot of inefficient code in a component, much like any abstraction does. Just because we can make everything into an element now, doesn’t mean we should. What goes into a component should be exceptional code. It should perform exceptionally well and have the least dependencies. Let’s not create lots of great looking components full of great features that under the hood are slow and hard to maintain. Just because you can’t see it doesn’t mean the rules don’t apply.
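As a concrete illustration of keeping the code behind an element exceptional, here is a deliberately tiny component sketch. It uses the Custom Elements registration API where available; the element name and the `renderBadge` helper are invented for this example:

```javascript
// Sketch: a tiny custom element whose logic is kept small and
// dependency-free. `renderBadge` and <user-badge> are invented for
// this example; customElements.define() is the standard registration
// API in browsers that support Custom Elements.
function renderBadge(name) {
  return '@' + (name || 'anonymous');
}

if (typeof customElements !== 'undefined') {
  customElements.define('user-badge', class extends HTMLElement {
    connectedCallback() {
      // Render from an attribute -- no framework, no hidden weight.
      this.textContent = renderBadge(this.getAttribute('name'));
    }
  });
}
```

In markup this would be `<user-badge name="codepo8"></user-badge>`. Whatever the API details, the point stands: everything inside the element is cost the consumer silently pays.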

Repeating the “just another plugin doing $x” mistake

You can create your own carousel using Web Components. That doesn’t mean, though, that you have to. Chances are that someone already built one, and the inheritance model of Web Components allows you to re-use this work. Just take it and tweak it to your personal needs. If you look for jQuery plugins that are image carousels right now, you had better bring some time. There are a lot out there – in various states of support and maintenance. It is simple to write one, but hard to maintain.

Writing a good widget is much harder than it looks. Let’s not create a lot of components because we can. Instead let’s pool our research and findings and build a few that do the job and override features as needed. Core components will have to change over time to cater for different environmental needs. That can only happen when we have a set of them, tested, proven and well architected.


I am super excited about this and I can see a bright future for the web ahead. This involves all of us, and I would love Flex developers to take a look at what we do here and bring their experience in. We need a rich web, and with the diversity of devices ahead I don’t see creating DOM-based widgets as the solution for that for much longer.

Who We Are And How We Should Be

Posted By Hacking for Christ

“Every kingdom divided against itself will be ruined, and every city or household divided against itself will not stand.” — Jesus

It has been said that “Mozilla has a long history of gathering people with a wide diversity of political, social, and religious beliefs to work with Mozilla.” This is very true (although perhaps not all beliefs are represented in the proportions they are in the wider world). And so, like any collection of people who agree on some things and disagree on others, we have historically needed to figure out how that works in practice, and how we can avoid being a “kingdom divided”.

Our most recent attempt to write this down was the Community Participation Guidelines. As I see it, the principle behind the CPGs was, in regard to non-mission things: leave it outside. We agreed to agree on the mission, and agreed to disagree on everything else. And, the hope was, that created a safe space for everyone to collaborate on what we agreed on, and put our combined efforts into keeping the Internet open and free.

That principle has taken a few knocks recently, and from more than one direction.

I suggest that, to move forward, we need to again figure out, as Debbie Cohen describes it, “how we are going to be, together”. In TRIBE terms, we need a Designed Alliance. And we need to understand its consequences, commit to it as a united community, and back it up forcefully when challenged. Is that CPG principle still the right one? Are the CPGs the best expression of it?

But before we figure out how to be, we need to figure out who we are. What is the mission around which we are uniting? What’s included, and what’s excluded? Does Mozilla have a strict or expansive interpretation of the Mozilla Manifesto? I have read many articles over the past few weeks which simply assume the answer to this question – and go on to draw quite far-reaching conclusions. But the assumptions made in various quarters have been significantly different, and therefore so have the conclusions.

Now everyone has had a chance to take a breath after recent events, and with an interim MoCo CEO in place and Mozilla moving forward, I think it’s time to start this conversation. I hope to post more over the next few days about who I think we are and how I think we should be, and I encourage others to do the same.

Browser inconsistencies: animated GIF and drawImage()

Posted By Christian Heilmann

I just got asked why Firefox doesn’t do the same thing as Chrome does when you copy a GIF into a canvas element using drawImage(). The short answer is: Chrome’s behaviour is not according to the spec. Chrome copies the currently visible frame of the GIF whereas Firefox copies the first frame. The latter is consistent with the spec.

You can see the behaviour at this demo page: animated GIF on canvas

Here’s the bug on Firefox and the bug report on WebKit to make it consistent. Thanks to Peter Kasting, there is also a bug filed for Blink.

The only way to make this work across browsers seems to be to convert the GIF into its frames and play them in a canvas, much like jsGIF does.
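A sketch of that frame-by-frame approach, assuming you already have the decoded frames as an array of `{image, delay}` objects (that structure is my assumption for illustration, not jsGIF’s actual API):

```javascript
// Sketch: play pre-extracted GIF frames on a canvas. `frames` is
// assumed to be an array of {image, delay} objects produced by a GIF
// parser -- that shape is an assumption, not jsGIF's real API.
function drawFrame(ctx, frames, i) {
  ctx.drawImage(frames[i].image, 0, 0);
  return (i + 1) % frames.length; // index of the next frame, looping
}

function playFrames(ctx, frames) {
  var i = 0;
  (function tick() {
    var delay = frames[i].delay;
    i = drawFrame(ctx, frames, i);
    setTimeout(tick, delay); // honour each frame's own delay
  })();
}
```

Called as `playFrames(canvas.getContext('2d'), frames)`, this shows the same frame in every browser, because you decide which frame gets drawn.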

21st Century Nesting

Posted By Hacking for Christ

Our neighbours have acquired a 21st century bird’s nest:

Not only is it behind a satellite dish but, if you look closely, large parts of it are constructed from the wire ties that the builders (who are still working on our estate) use for tying layers of bricks together. We believe it belongs to a couple of magpies, and it contains six (low-tech) eggs.

I have no idea what effect this has on their reception…

Copyright and Software

Posted By Hacking for Christ

As part of our discussions on responding to the EU Copyright Consultation, Benjamin Smedberg made an interesting proposal about how copyright should apply to software. With Chris Riley’s help, I expanded that proposal into the text below. Mozilla’s final submission, after review by various parties, argued for a reduced term of copyright for software of 5-10 years, but did not include this full proposal. So I publish it here for comment.

I think the innovation, which came from Benjamin, is the idea that the spirit of copyright law means that proprietary software should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires.

We believe copyright terms should be much shorter for software, and that there should be a public benefit tradeoff for receiving legal protection, comparable to other areas of IP.

We start with the premise that the purpose of copyright is to promote new creation by giving authors an exclusive right, but that this right is necessarily time-limited because the public as a whole benefits from the public domain and the free sharing and reproduction of works. Given this premise, copyright policy has failed in the domain of software. All software has a much, much shorter life than the standard copyright term; by the end of the period, there is no longer any public benefit to be gained from the software entering the public domain, unlike virtually all other categories of copyrighted works. There is already more obsolete software out there than anyone can enumerate, and software as a concept is barely even 50 years old, so none of it is in the public domain. Any which did fall into the public domain after 50 or 70 years would be useful to no-one, as it would have been written for systems long obsolete.

We suggest two ideas to help the spirit of copyright be more effectively realized in the software domain.

Proprietary software (that is, software for which the source code is not immediately available for reuse anyway) should not be eligible for copyright protections unless the source code is made freely available to the public by the time the copyright term expires. Unlike a book, which can be read and copied by anyone at any stage before or after its copyright expires, software is often distributed as binary code which is intelligible to computers but very hard for humans to understand. Therefore, in order for software to properly fall into the public domain at the end of the copyright term, the source code (the human-readable form) needs to be made available at that time – otherwise, the spirit of copyright law is not achieved, because the public cannot truly benefit from the copyrighted material. An escrow system would be ideal to implement this.

This is also similar to the tradeoff between patent law and trade secret protection; you receive a legal protection for your activity in exchange for making it available to be used effectively by the broader public at the end of that period. Failing to take that tradeoff risks the possibility that someone will reverse engineer your methods, at which point they are unprotected.

Separately, the term of software copyright protection should be made much shorter (through international processes as relevant), and fixed for software products. We suggest that 14 years is the most appropriate length. This would mean that, for example, Windows XP would enter the public domain in August 2015, which is a year after Microsoft ceases to support it (and so presumably no longer considers it commercially viable). Members of the public who wish to continue to run Windows XP therefore have an interest in the source code being available so technically-capable companies can support them.

Recommended Reading

Posted By Hacking for Christ

This response to my recent blog post is the best post on the Brendan situation that I’ve read from a non-Mozillian. His position is devastatingly understandable.

On Windows XP and IE6

Posted By Christian Heilmann

On Tuesday, Microsoft announced the end of support for Windows XP. For web developers, this meant much rejoicing as we are finally rid of the yoke that is Internet Explorer 6 and can now use all the cool things HTML5, CSS3 and other tech has to offer. Right? Maybe.


When I started web development, my first real day-to-day browser was IE4, then Netscape Navigator Gold, moving on to Netscape Communicator 4. I saw the changes of IE5, 5.5 and finally IE6. I was pretty blown away by the abilities IE6 had. You had filters, page transitions, rotation, blurring, animation using marquee and even full-screen applications using the .hta extension. In these applications you had full JScript access to the system: you could read and write files, traverse folders and much more. Small detail: so had attackers, as the security model wasn’t the best, but hey, details…

None of this was a standard, and none of it got taken on by other browsers. That probably wasn’t possible as features of browsers were the main differentiator and companies protected their USPs.

IE was never and will never be just a browser: it is an integral part of the operating system itself. For better or worse, Microsoft chose to make the web consumption tool also the file browsing and document display tool. Many of the very – at that time – futuristic features of IE6 were in there as they were needed for Powerpoint-style presentations.

That’s why the end of XP is a light at the end of the tunnel for all those suffering the curse that is IE6. Many users just didn’t bother upgrading their browser as what the OS came with was good enough.

A cracker’s paradise

Of course we now have a security problem: not all XP installs will be replaced, and the lack of security patches will result in many a hacked machine. Which is scary, seeing that many ATMs run on XP, as do lots of government computers (the UK government alone spent 5.5m GBP on getting extended support for XP, as moving on seems to be hard to do with that many machines and that much red tape). XP and IE6 aren’t just a nuisance for web developers – they have been a real threat to internet security and people’s online identities for a long time now.

The fast innovator in a closed environment dilemma

You can say what you want about IE6 – and it has been a running joke for a long time – but having it, and having it as the nemesis of web-standards-based browsers (Opera, Netscape 6 and subsequently Firefox), taught us a lot. Having a browser that dared to dabble with applications in HTML years before the W3C widget spec or Adobe AIR was interesting. Having a browser in the operating system that naturally was the first thing people clicked to go online helped the internet’s popularity. It didn’t help the internet as a whole though.

The big issue of course was that people didn’t upgrade and the OS didn’t force-upgrade the browser. This meant that companies had a fixed goal to train people on: if it works in IE6, it is good enough for us. That’s why we have hundreds of large systems that only work in IE. Many of those are enterprise systems: CRM, Asset management, Ticketing, CMS, Document management – all these fun things with lots of menus and trees and forms with lots of rules.

Nobody likes using these things. People don’t care for them; they just see them as a necessary thing to do their job, something created by magical hairy pixies called the IT department. When you don’t like something but need to use it, any change in it is scary, which is why a lot of attempts to replace these systems with more user-friendly and cross-platform ones are met with murmurings or open revolt. I call this the Stockholm syndrome of interfaces: I suffered it for years, so I must like it, right? All the other stuff means more work.

Back to the browser thing though: the issue wasn’t IE6. The issues were its ubiquity, an audience that wasn’t quite web-savvy yet and didn’t crave choice but instead used what was there, and Microsoft’s tooling centring around creating amazing things for IE first and foremost, with maybe a fallback for other browsers. The tools locked into IE6 were most of the time not created by web developers, but by developers of .NET, classic ASP, Sharepoint and many other – great – tools for the job at hand. Everything seemed easy, the tools seemed far superior to those that cover several targets, and when you stayed inside the ecosystem, things were a breeze. You didn’t even have to innovate yourself – you just waited until the platform added the next amazing feature as part of the build process (this even happened at awesome events that cost only your employer money and meant you got an awesome T-shirt to boot). Sounds eerily familiar to what’s happening now in closed platforms and abstracted developer tools, doesn’t it? Look – it’s the future, now – if you use platform x or browser y.

What should we take away from this?

Which brings me to the learning we should take away from these years of building things for a doomed environment: browsers change, operating systems change, form factors change. What we think is state-of-the-art and the most amazing thing right now will be laughable at best or destructive to innovation at worst just a year ahead.

And it is not Microsoft’s fault alone. Microsoft have seen the folly of their ways (OK, with some lawsuits as extra encouragement) and did a great job telling people to upgrade their systems and stop targeting OldIE. They understand that not every developer uses Windows and made testing with virtualisation much easier. They are also much more open in their messaging about what standards new IE supports. If they understand this, we should, too.

Here are the points we should keep in our heads:

  • Bolting a browser into an operating system makes it harder to upgrade it – you see this now in Android stock browsers or iOS. Many of the amazing features of HTML5 need to be polyfilled, not for old IE, but for relatively new browsers that will not get upgraded because the OS can’t get updated (at times on hardware that cost $500 just a few months ago)
  • Building software for the current state of the browser is dangerous – especially when you can’t trust the current state to even be stable. Many solutions relying on the webkit prefix functionality already look as silly as a “if (document.layers || document.all) {}” does.
  • Stop pretending you can tell end users what browser to use – this is sheer arrogance. Writing software means dealing with the unknown and preparing for it. Error handling is more important than the success case. Great UX is invisible – the thing just works. Bad error handling creates unhappy users, and there is nothing more annoying than being on a pay-by-the-minute connection in a hotel and being told to use another browser or update mine. Stop pretending your work is so important that people have to change for you, when all you need to do is be more creative in your approach.
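The sniffing-versus-detection point in the list above can be sketched in code: test for the capability, not the browser. The helper name is mine; the prefixed function names are the real historical ones:

```javascript
// Sketch: feature detection instead of browser sniffing. Ask the
// environment whether it has the capability (including historical
// vendor prefixes) rather than asking which browser it is.
function getRequestAnimationFrame(root) {
  return root.requestAnimationFrame ||
         root.webkitRequestAnimationFrame ||
         root.mozRequestAnimationFrame ||
         null; // caller can fall back to setTimeout
}
```

Something like `var raf = getRequestAnimationFrame(window) || function (cb) { setTimeout(cb, 16); };` keeps working whatever browsers do next.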

There are only a few of us unlucky enough to have to support IE6 in a pixel-perfect manner right now. The death of XP wasn’t the big liberation we really needed. And it by no means implies that you can now write web apps and web sites that rely on bleeding-edge technology in newer browsers without testing for it. That need will never go away, and it shouldn’t. It makes us craftsmen, it keeps us on the ball. We need to think before we code, and – to me – that is never a bad idea.

The rules did not change:

  • HTML is king – it will display when everything else fails, it will perform amazingly well.
  • Progressive Enhancement means you write for now and for tomorrow – expect things to break, and plan for it, and you can never be surprised.
  • Browser stats are fool’s gold – who cares how many people in North America with a certain statistics package installed use browser x or browser y? What do your end-users use? Optimise for form factors and interaction, not for browsers. These will always change.
  • Writing for one browser helps that one in the competition with others, but it hurts the web as a whole – we’re right now in a breakneck-speed rat-race of browser innovation. This yields a lot of great data but doesn’t help developers if the innovations vanish a few versions later. We have jobs to do and projects to deliver. There is not much time to be guinea pigs.
  • Real innovation happens when we enhance the platform – we need WebComponents in the browsers, we need WebRTC in browsers, we need a more stable Offline and Local Storage solution. What we don’t need is more polyfills as these tend to become liabilities.

So, RIP XP, thanks for all the hardship and confusion that made us analyse what we do and learn from mistakes. Let’s crack on with it and not build the next XP/IE6 scenario because we like our new shiny toys.

Mozilla Voices

Posted By Hacking for Christ

I invited people to email me; here’s what they have been saying.

I fear that Mozilla showed a weakness, when we replied to that initial complaint. We showed people we care about what they had to say about Brendan, and about politics. I think we shouldn’t. …

Although technically we are still good, I fear that our community is strained right now. We need to forget all politics, and focus on the mission. Only the mission. We shouldn’t care about other things. Hopefully we will pull through…

Recent events have made me very angry, and the more I think about it, the angrier I get. …

Brendan understood that for Mozilla to be successful in its mission, participants needed to check their prejudices at the door and work together to build this great thing. And he himself compartmentalized his prejudices away from his work life.

He awarded others this tolerance, but in the end was not awarded it himself by others.

While I am myself a strong supporter of equal marriage rights, I am shocked by what was done to Brendan. It was truly vindictive and intolerant, completely unbecoming of a movement that claims to fight for tolerance.

I am not sure what you will do with the feedback you get, but if you can, in the middle of the rest, express that there exists a point of view that the leadership does not listen well enough and needs to open up lines of communication to the leadership from employees, the community and even non-community users, that idea would be worth communicating.

I feel that Brendan was unfairly persecuted for expressing his views even though it seems evident he never allowed any personal views to affect his ability to function.

People have been justifying bashing his position on the basis that equality is normally and editorially required for any position of power. Unfortunately these people are either bordering on misinformed or purely idiotic.

I am surprised at how mean people can be toward Brendan. It is a big loss for Mozilla.

I have been using Firefox since it was called Phoenix. I have installed it on many PCs. I learned Javascript on Firefox. I was loyal to Firefox during the difficult years when it had memory and speed issues. I was generally impressed with Mozilla’s stance on the Open Web. Now, I am not so impressed with Mozilla.

Somebody has been forced to resign from Mozilla because of his beliefs/ideas/opinions. That is exactly the opposite of what Mozilla states to be its “mission” …

I find it horrific that this backlash is a repeat of what you experienced two years ago. And it’s deeply affected me in my impression of how welcomed Christians are at Mozilla.

If you want your voice heard, or just want to talk in confidence (say if so), please email me.

Your Ire Is Misdirected

Posted By Hacking for Christ

Hi. My name is Gervase Markham. I’m a supporter of traditional marriage, and I work for Mozilla. In fact, as far as being on the record goes, I believe I’m now the only one.

Many people who agree with me on this issue are very upset about what happened to Brendan Eich, our co-founder and, for two weeks, CEO of the Mozilla Corporation. Brendan was appointed and then, after 10 days under the Internet’s lens of anger based on his donation in opposition to the redefinition of marriage, stepped down and stepped away from Mozilla – to our great loss.

I am assured by sources I trust that Brendan decided to leave of his own accord – he was not forced out. My understanding is that the senior management of Mozilla (many of whom disagree with him on this issue) worked very hard to support him, even if I would not agree with all the actions they took in doing so. However, he eventually felt that it was impossible for him to focus on leading if he was spending all of his time dealing with the continued, relentless news and social media storm surrounding the donation he made. In other words, he wasn’t forced out from the inside – he was dragged out from the outside.

So, here’s my plea: please don’t be angry with Mozilla. Mozilla, and what it does and stands for, is too important to the future of the free web to allow this to do it damage. It was we who brought innovation back to the web browser market and started the process which led to the awesome web you use today. And now, we’re trying to do the same with the closed smartphone market. I believe that connecting billions of people in the developing world to the web at minimal cost and with full fidelity will lead to the next great advance in human flourishing, as people can use the information they discover to make their own lives better. That’s our goal.

If you can’t find it in your heart to forgive them (the course I would recommend), then your anger is best directed at those outside Mozilla who made his position untenable: the press that twists and sensationalizes without investigation, social media which magnifies and over-simplifies without consideration, and those who rush to judgement without understanding. I’m not going to name names or organizations. But as far as Mozilla itself goes, please, please continue to support us.

I am determined to work to make all Mozillians of whatever beliefs – and whatever actions they take outside of Mozilla in support of those beliefs – confident that, if they can work with other Mozillians as Brendan did so well for 15 years, Mozilla is a place for them. How successful we’ll be at that depends on how our community deals with what just happened – but it also depends on you. If you jump to paint Mozilla in the colours of ‘the opposition’, that will become a self-fulfilling prophecy. And the world will be poorer for it.

Mozilla is caught in the middle of a worldview war. Let’s not make the free web a casualty.

Fear, Anger and Gloat – or how to deal with a communication nightmare

Posted By Christian Heilmann

Being in the middle of a communication nightmare is never fun, but it is an important learning experience. I am sure that most problems start with miscommunication and escalate from there.

Say something happens that you very much disagree with. Someone says something that attacks you personally, your beliefs, or a group that you very much identify with.

This doesn’t feel good, and it triggers other feelings. It could be anger, disgust, annoyance, helplessness, fear, embarrassment or insecurity, just to name a few. None of those are good feelings. Some can be channelled into good results, but most leave you somewhere between unproductive at best and utterly shattered at worst.

Let’s take a look at the most common ones:


fear makes people do horrible things

“Fear is the mind killer” is absolute truth. People who are afraid stop contributing and are silenced. This is how totalitarian regimes work: you show yourself as all-powerful and the one to make decisions, and you silence all of those who speak against you in a very public and brutal fashion. This makes everyone live in fear – citizens and enemies alike. Fear makes you feel helpless; you don’t want to speak up as you don’t want to stand out. In the worst cases you don’t want to speak out as it would punish all the ones you love. You don’t want to speak as you will feel the brunt of the loud and aggressive masses. You have input to give, but it feels unfair that, because of what you stand for, you get pushed into one camp in a black-and-white, loggerheads scenario.


Anger can be productive. I am angry at myself for letting my flat get into the state it is in now, so I am cleaning up. Anger can also end any sensible discussion, or be outright dangerous. I cycle a lot in London. People cut into my lane; people push closer to me than they should. I could knock on their cars or shout at them. That would most likely get me killed, as it would distract them and startle them into violent movements. Sometimes the best thing is to count to 10 and let it pass. Anger has an unfortunate tendency to pile up.

Holding on to anger is like drinking poison and expecting the other person to die - Buddha


In most drawn-out communication problems it sooner or later turns out that one of the attackers isn’t without flaws, or entirely innocent, either. This shouldn’t be a surprise – we’re all human. In many cases the most avid attackers of a cause are people who are simply afraid of being the thing they attack. Fear again. Gloating about this is toxic. It is a game of throwing blame back and forth that nobody can win.

LinuxLive Bristol 2014

Posted By Leo McArdle

I wrote the bulk of this post shortly after the event but then due to a number of things, including work, forgetfulness and the recent events within Mozilla, I never edited and published it. In the spirit of ‘better late than never’ here it is:

Most mornings I would groan at having to get up at 5:30, and I would love to say one Saturday three weeks ago was an exception. Sadly, it wasn’t, but the grogginess of having to wake up so early had soon cleared away by the time I was on a rather empty train going down to Bristol.

That Saturday I spent my day helping out at a small event in Bristol, organised by the local Bristol and Bath Linux User Group, aimed at converting users of the soon-to-be EOLed Windows XP to a Linux distribution.

While the event wasn’t quite as well attended as the organisers had hoped, there were still a number of attendees I managed to talk to about Mozilla and Firefox, and the time not spent talking to attendees was filled with other interesting discussions.

I had a number of goals for the event but a few of these got thrown out of the window when the technological knowledge of most attendees was slightly higher than I expected. Because of this, rather than spend time explaining what a web browser was to people, I focused on telling the Mozilla story, and helping users with any problems they had in Firefox.

I also ended up imparting some Linux knowledge to attendees, being a reasonably longtime user of Arch Linux.

Only the other day I was invited by one of the organisers to a re-run of the event. I fully intend to attend, as with a few more people there, this event format feels like it could be very successful.

My thanks go out to the organisers: the event was a great opportunity to tell people the Mozilla story, meet some ‘real life’ Firefox users, and help with and discuss the problems they had (one of which I intend to blog about when I get round to it).

Both images by David Fear used under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.

Recent Events

Posted By Hacking for Christ » Syndicate

It’s possible some people may want to discuss or give their view on recent events but, given the strength and tone of opinion expressed already, may not feel safe doing so in public. If that’s true of you, please feel free to email me – I’m available to talk.

I may produce anonymous summaries of what people are saying to me so that others can understand how people are feeling; I want everyone to feel their voices can be heard. But if you want that not to happen for you, just say.

If you need it, you can find my PGP public key here.

LinkedIn Moving to Always-On HTTPS

Posted By Hacking for Christ » Syndicate

I didn’t see this article by LinkedIn when it was first posted in October last year. But it warms the heart to see a large company laying out how it is deploying things like CSP, HSTS and pinning, along with SSL deployment best practice, to make its users more secure. I hope that many more follow in their footsteps.

IE11, Certificates and Privacy

Posted By Hacking for Christ » Syndicate

Microsoft recently announced that they were enhancing their “SmartScreen” system to send back to Microsoft every SSL certificate that every IE user encounters. They will use this information to try and detect SSL misissuances on their back end servers.

They may or may not be successful in doing that, but this implementation raises significant questions of privacy.

SmartScreen is a service which submits the full URLs you visit in IE (including query strings) to Microsoft for reputation testing and possible blocking. While Microsoft tries to reassure users by saying that this information passes to them over SSL, that doesn’t help much. It means an attacker with control of the network can’t see where you are browsing from this information – but if they have control of your network, they can see a lot about where you are browsing anyway. And Microsoft has full access to the data. The link to “our privacy statement” in the original SmartScreen announcement is, rather worryingly, broken. This is the current one, and it also tells us that “each SmartScreen request comes with a unique identifier”. That doesn’t contain any personal information, but it does allow Microsoft, or someone else with a subpoena, to reconstruct an IE user’s browsing history. The privacy policy also says nothing about whether Microsoft might use this information to e.g. find out what’s currently trending on the web. It seems they don’t need to provide a popular analytics service to get that sort of insight.

You might say that if you are already using SmartScreen, then sending the certificates as well doesn’t reveal much more information to Microsoft about your browsing than they already have. I’d say that’s not much comfort – but it’s also not quite true. SmartScreen does have a local whitelist for high traffic sites and so they don’t find out when you visit those sites. However (I assume), every certificate you encounter is sent to Microsoft, including high-traffic sites – as they are the most likely to be victims of misissuance. So Microsoft now know every site your browser visits, not just the less common ones.

By contrast, Firefox’s (and Chrome’s) implementation of the original function of SmartScreen, SafeBrowsing, uses a downloaded list of attack sites, so that the URLs you visit are not sent to Google or anyone else. And Certificate Transparency, the Google approach to detecting certificate misissuance after the fact which is now being standardized at the IETF, also does not violate the privacy of web users, because it does not require the browser to provide information to a third-party site. (Mozilla is currently evaluating CT.)
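The privacy advantage of the downloaded-list approach can be illustrated with a small sketch. This is not the real SafeBrowsing protocol – the function names and the 4-byte prefix length are illustrative assumptions – but it shows the core idea: the browser checks URL hashes against a locally stored list, so the URLs you visit never leave your machine.

```python
import hashlib

def url_hash_prefix(url, length=4):
    """Return the first `length` bytes of the SHA-256 hash of a URL."""
    return hashlib.sha256(url.encode("utf-8")).digest()[:length]

def build_local_list(bad_urls):
    """Simulate the periodically downloaded list of attack-site hash prefixes."""
    return {url_hash_prefix(u) for u in bad_urls}

def is_suspicious(url, local_list):
    """Purely local check – no URL is sent anywhere at browse time."""
    return url_hash_prefix(url) in local_list

local_list = build_local_list(["http://malware.example/payload"])
print(is_suspicious("http://malware.example/payload", local_list))  # True
print(is_suspicious("http://mozilla.org/", local_list))             # False
```

In the real protocol a local prefix match triggers a further server lookup, but only for that rare case – the everyday browsing stream stays private.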

If I were someone who wanted to keep my privacy, I know which solution I’d prefer.

Firefox OS Brand Requirements Brown Bag: Today, 9am PDT (4pm UTC)

Posted By Hacking for Christ » Syndicate

[Update: A recording is available for vouched Mozillians]

Late-breaking news – spread the word to anyone interested in the requirements we put on carriers in order to sell phones branded Firefox OS; i.e. the way we bring our values into the mobile world.

Pete Scanlon writes:

Hello Mozilla Community

Wanted to extend an invitation to have you join a brown bag session tomorrow[today -- Gerv], March 25th at 9am PDT to discuss the requirements for device manufacturer and network operator partners to use the Firefox OS name and word mark in their efforts to bring the mobile web to more people in more places.

Given the size of the opportunity and the associated scope, we will focus our conversation on the level of devices that represent a web based operating system only. This group includes smartphones, tablets, smart televisions, and the emerging “Internet of Things” category.

The discussion will be live streamed on Air Mozilla for all Mozillians (you will need to sign in with your LDAP[Mozillians, I hope -- Gerv] credentials) and will be available afterward for your reference.

Join us on IRC in the #townhall channel. [There's a password; ask a staff member who got the original email -- Gerv].

Many thanks for your interest.

Pete Scanlon

Performance Comparison

Posted By Hacking for Christ » Syndicate

Mozilla 1.0 was released 12 years ago, in 2002. We did a new start page for the browser, using cutting-edge web technology. It can still be found on (It’s suffered a little bit in the archiving… The transparent PNG dino tail is broken, and www-archive claims wrongly that it’s UTF-8 so you have to pick “Western” in the Character Encoding menu to fix the � characters.)

There’s a little easter egg in it. If you mouse over the word “Party”, then it launches little “DHTML” (that’s what we called it back then, when I were a lad) fireworks. It only fires one firework at a time. We tried it with more, but we discovered performance problems with manipulating more than a dozen “particles” smoothly at once.

How far we’ve come.

Mozilla and the Future

Posted By Hacking for Christ » Syndicate

I am delighted that Brendan Eich has been named the new CEO of the Mozilla Corporation.

At this time of transition, I would like to encourage Mozilla community members to focus on, and to blog about, the future they’d like to see for the project. I’d love to read where others think we should be going, and I hope Brendan would too.

Here are my thoughts:


  • We should make sure that our requirements on Firefox OS carriers and OEMs for openness, transparency and Mozilla-ness are stringent enough that at least a few say “No, we can’t do that; we’ll go elsewhere”. If everyone agrees to your terms, you aren’t asking for enough.
  • We should have a community conversation about what those requirements should be. (Some of what I think is outlined by implication in my recent post Mozilla and Proprietary Software.) To that end, I’m delighted that today there’s a brown bag on the Firefox OS Brand Requirements, which is the document which defines them. (It’s not currently clear who can come to this brown bag; I’m trying to get clarity.)
  • We should get down to a price point for our lowest-end phone – say $25 – and then ride Moore’s Law up the hardware spec scale at that price point. That is to say, at some point we should stop trying to make Gecko smaller and focus on improving the capabilities without growing it faster than the hardware is growing at that price point. Let’s not bet against Moore’s Law.
  • We shouldn’t try and compete at the high end, but we do need to move into the mid market, because that’s where there’s both global volume and money. (At the low end there’s volume but little money, and at the high end there’s money but less volume.) If we can’t make our ecosystem pay for developers and operators, it won’t grow.


  • Brendan has talked about differentiating Firefox in the “trust” area. We should ship Collusion as part of Firefox for desktop, with surrounding explanation of what it means. We should build and ship Tor on Firefox OS (and find a way to extend the Tor network while doing so).
  • The average network connection of the average customer is getting worse, because more people are coming online in places where the networks suck – both in bandwidth and latency. We need to make that less painful. HTTP/2 is one way; perhaps there are others, in collaboration with operators and sites. And our offline app support needs to be awesome.
  • We’d probably need a consensus to do it, but we should try and build one, and make some new web features HTTPS-only.

Governance and Community

  • We need to figure out how our project should be governed, and how those governance structures mesh with the org chart of the Mozilla Corporation. Having clear community governance is vital if we want to grow the community and allow non-employees to take on positions of responsibility. You can’t take a position which doesn’t exist.
  • Our community governance structures need to cover all that we do, not just a portion as now, and they probably need to change to meet the needs of the Mozilla of 2014.
  • We need to help our new mobile partners live in our world and become fully-fledged participants and contributors. If we end up just being an upstream code source, that’s a loss for us.

Treeclosure stats

Posted By The Automated Tester

As the manager of the sheriffs I am always interested in how often the sheriffs, and anyone else, close the tree. For those who don't know who the Mozilla Sheriffs are, they are the team that manage the code landing in a number of Mozilla trees. If a bad patch lands they are the people who typically back it out. There have been some recent changes in the way the infrastructure does things which have led to a few extra closures. Not having the data for this, I went and got it (you can see the last year's worth of data for Mozilla-Inbound below):

2013-03  infra: 14:59:38; no reason: 5 days, 12:13:31; total: 6 days, 3:13:09
2013-04  infra: 22:21:18; no reason: 3 days, 19:30:21; total: 4 days, 17:51:39
2013-05  infra: 1 day, 2:03:08; no reason: 4 days, 11:30:41; total: 5 days, 13:33:49
2013-06  checkin-compilation: 10:04:17; checkin-test: 1 day, 5:48:15; infra: 18:44:06; no reason: 5:05:59; total: 2 days, 15:42:37
2013-07  backlog: 22:38:39; checkin-compilation: 1 day, 13:05:52; checkin-test: 2 days, 16:43:53; infra: 1 day, 2:16:02; no reason: 0:30:13; other: 1:32:23; planned: 4:59:09; total: 6 days, 13:46:11
2013-08  backlog: 4:13:49; checkin-compilation: 1 day, 23:49:34; checkin-test: 1 day, 12:32:35; infra: 13:06:19; total: 4 days, 5:42:17
2013-09  backlog: 0:21:39; checkin-compilation: 1 day, 8:27:27; checkin-test: 2 days, 15:17:50; infra: 15:34:16; other: 2:02:07; planned: 3:16:22; total: 4 days, 20:59:41
2013-10  checkin-compilation: 15:29:45; checkin-test: 3 days, 10:41:33; infra: 16:31:41; no reason: 0:00:05; other: 0:09:01; planned: 2:30:35; total: 4 days, 21:22:40
2013-11  checkin-compilation: 1 day, 9:40:25; checkin-test: 4 days, 18:41:35; infra: 1 day, 19:11:36; no reason: 0:05:54; other: 3:28:40; planned: 1:50:20; total: 8 days, 4:58:30
2013-12  backlog: 5:07:06; checkin-compilation: 18:49:29; checkin-test: 1 day, 16:29:16; infra: 6:30:03; total: 2 days, 22:55:54
2014-01  backlog: 1:54:43; checkin-compilation: 20:52:34; checkin-test: 1 day, 12:22:01; infra: 1 day, 5:37:14; no reason: 1:20:46; other: 4:53:42; planned: 3:48:16; total: 4 days, 2:49:16
2014-02  backlog: 3:08:18; checkin-compilation: 1 day, 12:26:35; checkin-test: 15:30:42; infra: 19:40:38; no reason: 0:00:16; other: 0:47:38; total: 3 days, 3:34:07
2014-03  backlog: 8:52:34; checkin-compilation: 19:27:21; checkin-test: 1 day, 0:37:55; infra: 19:47:13; other: 2:53:21; total: 3 days, 3:38:24
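Durations in this format are straightforward to aggregate. Here is a rough sketch of how closure time could be totalled by reason – an illustration of the bookkeeping, not the sheriffs' actual tooling:

```python
from datetime import timedelta

def parse_duration(text):
    """Parse '6 days, 3:13:09' or '14:59:38' into a timedelta."""
    days = 0
    if "day" in text:
        day_part, text = text.split(",", 1)
        days = int(day_part.split()[0])
    h, m, s = (int(x) for x in text.strip().split(":"))
    return timedelta(days=days, hours=h, minutes=m, seconds=s)

def total_by_reason(monthly_stats):
    """monthly_stats: one {reason: duration-string} dict per month."""
    totals = {}
    for month in monthly_stats:
        for reason, text in month.items():
            totals[reason] = totals.get(reason, timedelta()) + parse_duration(text)
    return totals

# The first two months of infra closures from the data above:
stats = [
    {"infra": "14:59:38", "no reason": "5 days, 12:13:31"},
    {"infra": "22:21:18", "no reason": "3 days, 19:30:21"},
]
print(total_by_reason(stats)["infra"])  # 1 day, 13:20:56
```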

I created a graph of the data showing Mozilla Inbound since we started using it in August 2012 till now.

Closures by reason on Mozilla Inbound

For the first part of the graph there wasn't any data for specific closure reasons, but the sheriffs changed that in the middle of last year. I am hoping that we can get information like this, and other interesting back-out info, into a "Tree Health Report" in Treeherder (the TBPL replacement the Automation and Tools Team is developing).

Mozilla and Proprietary Software

Posted By Hacking for Christ » Syndicate

Mozilla is both a principled organization and a pragmatic one.

Mozilla products run on proprietary operating systems, and on proprietary hardware. We are in the mobile OS business, and no-one, not even the mighty Google, has yet been able to make a 100% open source phone available in commercial quantities. So proprietary software is part of Mozilla’s life. But I think most people in our community would be rightly upset if Mozilla decided, for example, to take advantage of the provision in the MPL which would allow us to ship proprietary builds of Firefox on the desktop.

So, the question arises: where’s the line? Where in the big picture is it OK for proprietary software to be, and where is it not OK?

“You don’t have to make a case for open. You have to make a case for not open.” — Johnathan Nightingale

Over time, this question has been arising in a number of different contexts. And I think the answers we might give at the Mozilla project would be different to those you might hear from the FSF, or the Apache project, or the Android project, to name but three points on a wide spectrum of opinion. So I think it would be a productive conversation to try and work out some principles in this area – or, at least, to gauge the range of opinion. As johnath says, if we are using or distributing closed software, we need to make an active case for why we are doing it.

This post is therefore a discussion starter, and outlines where I currently think the line is – i.e. where a reasonable case can be made for closed, and where it cannot. It could be that, in the future, a case can be made for additions to, removals from or modifications of this list. But having a defined list at least helps to make it clearer what is a new situation where a case needs to be made, and what is another example of something we’ve done before.

Note that this post represents my opinion only, and is not official Mozilla policy. Although it speaks of things that currently are, as well as things that currently are not, for ease of reading, I will write directly rather than using conditional language (i.e. “will” rather than “should”).



  1. The basic rule is that software written by Mozilla will be open source. Mozilla is a public benefit organization; we do not use money given to us to write proprietary software.

    Rationale: Manifesto Principle 7.

  2. Mozilla may distribute proprietary software written by others with its own software under the following circumstances:
    1. If it’s a missing important piece of functionality provided by an OS vendor for a proprietary operating system on which our software runs;
    2. If the software is required to make use of the hardware on which the product runs, and there is no open source alternative driver of sufficient quality.

    Example of A): the Direct3D DLL, included under the Binary Components policy. Example of B): hardware drivers for Firefox OS.

    These situations are seen as sub-optimal and we look for opportunities to eliminate them, as opportunity and market power permit. They are not seen as precedent-setting. This is a negotiating point in discussions with hardware manufacturers, particularly for reference devices.

    In the past, we shipped the “Talkback” crash-reporting software, which did not fall under either of these exceptions. We now use the open source Breakpad. This replacement took seven long years to arrive. Now that Talkback is gone, we should not go back there.

    Rationale: without such exceptions, we can’t ship competitive products (or, in the case of B, any products at all). But we need to define them tightly.

  3. Mozilla’s products will execute proprietary code in web content.

    Example: most JavaScript on the web today.

    Rationale: without this, our products would effectively not browse the web at all.

  4. Partners

  5. Mozilla may permit its partners to distribute proprietary software in a product using a Mozilla brand under the circumstances above. Mozilla’s partners may also ship proprietary apps in their versions of Firefox OS. Such apps must be uninstallable. Additions to the platform not falling under one of the exceptions above must be open source.

    Rationale: same as above, plus requiring that all default partner apps be free software means many popular apps could not be bundled, making our offering much less compelling. If we allow users to install proprietary apps, there is not significant additional harm in bundling (uninstallable) ones. Requiring arbitrary platform additions to be open source is necessary to allow users to build updated versions of the software for their phones. (Binary driver blobs use a known API and, while it’s sub-optimal, can be copied from official builds into user ones.)

  6. Mozilla will only allow Mozilla brands to be used for software on phones which are bootloader-unlockable.

    Rationale: Mozilla stands up for user freedom, including the freedom to hack one’s phone, and update the OS even when the vendor has ceased support.

  7. Software Added Later

  8. Mozilla’s products may sometimes automatically download and install deterministically-built binary builds of other open source software where we would prefer not to distribute it ourselves, e.g. for patent license reasons. However, there may be additional requirements we would want to be met before we solved a problem using this solution.

    Example: Cisco’s H.264 binary builds made from OpenH264. (Note: the exact user experience in this case has not yet been determined. I am just saying that I think it would be OK if Firefox downloaded and installed this software automatically.)

    Rationale: Software patents suck. Because Cisco have made H.264 free-as-in-price at the point of use for everyone, we managed to get a draw in this particular round of the codec wars. (The other options were much worse.) But fighting patents is done at the standards and industry level, not at the “make every user click a button” level. If the source is open and the binaries are deterministically built, then users are using binaries of free software which is bit-for-bit identical to that we could build for them ourselves, and so requiring a user confirmation here gains us nothing.
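The "bit-for-bit identical" property is what makes this verifiable in practice: anyone can build from the open source and compare digests with the downloaded binary. A minimal sketch of that check, with made-up byte strings standing in for the real OpenH264 artifacts:

```python
import hashlib

def sha256_of(data):
    """Digest of a build artifact's bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_download(downloaded_bytes, expected_digest):
    """Accept the binary only if it matches the digest of our own
    deterministic build from the same source."""
    return sha256_of(downloaded_bytes) == expected_digest

our_build = b"openh264 plugin bytes"      # built from source ourselves
vendor_binary = b"openh264 plugin bytes"  # downloaded from the vendor
print(verify_download(vendor_binary, sha256_of(our_build)))  # True
```

If the digests match, the user is running free software whose binary anyone could have produced, which is the argument for skipping a confirmation click.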

  9. Mozilla will allow proprietary software in the app stores and add-on stores that it runs. Mozilla will make sure the license terms for software are clearly marked, and are searchable as a metadata field.

    Example: Firefox OS Marketplace. (Unfortunately, license metadata is not currently collected or available for searching.)

    Rationale: some people, including members of our community and vocal Mozilla supporters, wish to avoid using proprietary software; we should help them make choices in line with their ethics.

  10. Mozilla’s products may give the user the UI option of downloading, installing and running binary builds of proprietary software (e.g. an addon or plugin) but will not get to the point of executing such software without getting explicit or implicit user consent somewhere along the way. “Implicit consent” means that the user has taken some action (e.g. installing the Flash plugin themselves) which was not mediated by Firefox but which we know must have happened.

    Example: Mozilla allows users to download proprietary Firefox add-ons through the Add-On Manager UI. The Plugin Finder Service will point users at downloads of proprietary plugins such as Flash. But all require at least one explicit confirming click to install.

    Rationale: some people, including members of our community and vocal Mozilla supporters, wish to avoid executing proprietary software; we should not sneakily run it on their systems. However, even offering it is enough for Firefox to not be in the FSF’s directory of free software. :-|

  11. Network Services

  12. Mozilla prefers to use open source software for end-user network services it builds into its products. However, we are willing to partner with companies who use proprietary software and/or data. Such proprietary services must be able to be disabled by the user, and the API endpoint must be configurable by the user or 3rd party software such as an extension (e.g. an about:config setting in Desktop Firefox).

    Examples: Safe Browsing, geolocation.

    Rationale: Mozilla is starting efforts in geolocation, speech recognition and translation to either replace or avoid depending on proprietary services in these areas. But building e.g. a replacement for Google Safe Browsing, which protects many, many Firefox users from malware and phishing every day, would be a mammoth undertaking. And removing it would put our users at significant risk. Endpoint URL configurability allows people to reverse-engineer service APIs and implement alternatives which Firefox can then easily use.
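The endpoint-configurability requirement can be sketched as follows. The preference names and URLs here are hypothetical, but the pattern mirrors an about:config-style store: user-set values override the shipped partner default, and the user can disable the service outright.

```python
# Shipped defaults for a hypothetical network service.
DEFAULTS = {
    "browser.safebrowsing.endpoint": "https://partner.example/api/v1",
    "browser.safebrowsing.enabled": True,
}

def get_pref(user_prefs, name):
    """User-set values (or values set by an extension) win over defaults."""
    return user_prefs.get(name, DEFAULTS[name])

def service_endpoint(user_prefs):
    """Return None if the user has disabled the service entirely."""
    if not get_pref(user_prefs, "browser.safebrowsing.enabled"):
        return None
    return get_pref(user_prefs, "browser.safebrowsing.endpoint")

# A user points the client at a reimplemented, compatible API:
print(service_endpoint(
    {"browser.safebrowsing.endpoint": "https://my-own-server.example/api/v1"}))
```

This is why endpoint configurability matters: a reverse-engineered alternative service only helps users if the client can actually be pointed at it.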

  13. Development

  14. Mozilla’s products will run on proprietary operating systems, and therefore may require proprietary software, such as a compiler or SDK, as part of the build process for such systems. Mozilla’s products will not require proprietary tools to build on free operating systems.

    Example: Release builds of Firefox on Windows are built using Microsoft Visual Studio, and most developers on Windows use it for their builds too.

    Rationale: if one is using a proprietary OS, there seems no additional harm in using proprietary build tools.

  15. Mozilla strongly prefers to use open source software for network services it stands up for use by the Mozilla developer community, but may use proprietary software if no open source software of equivalent functionality is available. In such cases, Mozilla provides some resources (money or people) to help rectify that situation.

    Example: Mozilla uses Vidyo, and so Mozillians who want to use it have to use the proprietary Vidyo client, or Flash. But we are developing WebRTC in the browser, and hope that thereby solutions will emerge where people can participate in multi-party video using only open source software. We are also trying to get the SIP gateway working (that bug is restricted to the ‘infra’ group so you may not be able to see it) so people can video-call in using free software.

    Rationale: we should not compromise our effectiveness by using significantly sub-standard tools; but as a member of the wider open source community and as a public benefit organization, we have a responsibility to grow the commons in areas where we have an interest.

  16. Mozilla community members are free to use proprietary desktop software if they wish. Mozilla may therefore pay for licenses for particular bits of proprietary software for the use of Mozilla employees, contractors or interns. Mozilla will not implement systems which require non-employees to use proprietary desktop software to be part of the community.

    Example: Windows, Office, internal payroll or HR systems. (Vidyo doesn’t quite break that last rule, as someone can still dial in to any meeting by phone.)

    Rationale: there are no effective substitutes for some of this software. However, we should not lock free software advocates out of full participation in our community.

Management is hard

Posted By The Automated Tester

I have been a manager within the A*Team for 6 months now and I wanted to share what I have learnt in that time. The main thing that I have learnt is that being a manager is hard work.

Why has it been hard?

Well, being a manager requires a certain amount of people skills. Being able to speak to people and check they are doing the tasks they are supposed to is the trivial part. You can be a "good" manager doing just this, but being a great manager means knowing how to fix problems when members of your team aren't doing the things that you expect.

It's all about the people

As an engineer you learn how to break down hardware and software problems and solve them. Unfortunately, breaking down problems with people in your team is nowhere near the same. Engineering skills can be taught, even if the person struggles at first, but people skills can't be taught in the same way.

Working at Mozilla I get to work with a very diverse group of people literally from all different parts of the world. This means that when speaking to people, what you say and how you say things can be taken in the wrong way. It can be the simplest of things that can go wrong.


Being a manager means that you are there to help shape people's careers. Do a great job of it and the people that you manage will go far; do a bad job of it and you will stifle their careers and possibly make them leave the company. The team that I am in is highly skilled and highly sought after in the tech world, so losing them isn't really an option.


Part of growing people's careers is asking for feedback and then acting upon that feedback. At Mozilla we have a process of setting goals and then measuring the impact of those goals on others. The others part is quite broad, from team mates to fellow paid contributors to unpaid contributors and the wider community. As a manager I need to give feedback on how I think they are doing. Personally I reach out to people who might be interacting with people I manage and get their thoughts.

But I don't stop at asking for feedback on the people I manage; I also ask the people I manage for feedback about how I am doing. If you are a manager and have never done this, I highly recommend it. What you can learn about yourself in a 360 review is quite eye-opening. You need to set ground rules, like: the feedback is private and confidential AND MUST NOT influence how you interact with that person in the future. Criticism is hard to receive, but a manager is expected to be the adult in the room, and if you don't act that way you head into the realm of a bad manager – a place you don't want to be.

So... do I still want to be a manager?

Definitely! Being a manager is hard work, let's not kid ourselves, but seeing your team succeed, and the joy on people's faces when they do, is amazing.

Don't write "Five Hidden Costs of X" but when you do I will reply

Posted By The Automated Tester

Recently I was shown that Telerik had published a "Five Hidden Costs of Selenium" article. I knew straight away from the title that this was purely a marketing document, targeting teams with little to no automation skills. For what it is worth, if you want to do automation you should really hire the right engineers for the job.

My problem with the article is not that it's wrong – there are a few items I disagree with, which are documented below – but that it is trying to sell snake oil and silver bullets. So let's even up the argument a bit. Note I am only comparing the WebDriver parts, since if it were purely Selenium IDE vs Telerik's tool then I think the comments would be fair.

No Build/Scheduling Server

Telerik says we don't have these items, and we don't. We don't want to be working on them, since there are some awesome open source products, with many years' worth of engineering effort in them, that you can use. These are free and allow a great amount of freedom of customisation. They also work really well if you have hybrid systems as part of your test. Have you seen that ThoughtWorks has open sourced Go, a great product from people who have been doing continuous integration for a decade, if not more? Don't want to host it yourself, because managing servers is a hidden cost in all worlds? Then look at the huge number of Continuous Integration as a Service companies out there.

Execution Agents/Parallel Running

The article says this is a 3rd-party plugin, which is not true. The Selenium server has a remote server system built in and, if the correct arguments are passed, it can become a grid with a central hub managing which nodes are in use. This is called Selenium Grid.

The one thing, going by the documentation, is that you have to host all these nodes yourself. Does it create hermetic environments to run against when scheduling? Hermetic environments are something every core developer would want, and if we can't provide that then it's not worth releasing. There are Infrastructure as a Service companies that WebDriver tests can be hooked up to, so you don't need to maintain all the infrastructure yourself. The infrastructure costs can be quite expensive if you're a smallish team, so using a service here can help. Unfortunately Telerik doesn't offer execution nodes as a service, so you'll have to manage them yourself.

Also, it's fine that NUnit doesn't support parallel execution: get your scheduling server to run a task for each browser, and those tasks can be run in parallel.
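
A minimal sketch of that idea in Python (the browser list and `run_suite` are hypothetical stand-ins; a real scheduling server would shell out to the actual test runner for each task):

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical browser matrix; a CI server would define one task per browser.
BROWSERS = ["firefox", "chrome", "internet explorer"]

def run_suite(browser):
    # Stand-in for "invoke the NUnit/JUnit suite against this browser";
    # a real task would run the suite and collect its result.
    return browser, "passed"

# Run the per-browser tasks in parallel, just as a scheduling server would.
with ThreadPoolExecutor(max_workers=len(BROWSERS)) as pool:
    results = dict(pool.map(run_suite, BROWSERS))
```

The point is that parallelism lives in the scheduler, not in the test framework itself.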


Reporting

This is best done by the continuous integration server as part of the build and test. These servers take the output from the tasks they have been told to execute and then report on it. Having this as a selling point in marketing documentation feels like it is just targeting untrained staff.

Multi-Browser Support

This is where you would think we would be even, but Telerik is stuck on desktop browsers. WebDriver's flexible transport mechanism (it's like we thought about it or something) means anyone can implement the server side and control a browser, and all the language bindings just work. We recently saw that with BlackBerry creating an implementation for their devices. We have Selendroid for Android and iOS-Driver for iOS. Mobile is the future, so only supporting the major desktop browsers is going to limit your future greatly.
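
The transport really is just HTTP plus JSON; sketching what a new-session command looked like under the JSON Wire Protocol of the time shows why any vendor can implement either end:

```python
import json

# What a client sends to start a session under the JSON Wire Protocol:
# an ordinary HTTP request with a JSON body. Any server that speaks
# HTTP+JSON can answer it, on any platform, in any language.
new_session = {
    "method": "POST",
    "path": "/session",
    "body": json.dumps({"desiredCapabilities": {"browserName": "firefox"}}),
}
```

Because the wire format is this simple, a new browser vendor only has to implement the server side and every existing client binding works unchanged.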

Jim also mentioned that you would need to build a factory and teach it how to get things running against multiple projects. You do, but here is a link to the Selenium project's way of handling it. We need to run our tests in multiple environments and we do it pretty well. This is just a one-time sunk cost.
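
The shape of such a factory is small; here is an illustrative sketch (`StubDriver`, `DRIVERS` and `make_driver` are hypothetical names, and with the real Selenium bindings the callables would be things like `webdriver.Firefox` or `webdriver.Remote`):

```python
# Illustrative only: a stub stands in for real driver classes.
class StubDriver:
    def __init__(self, name):
        self.name = name

# One central mapping from browser name to driver constructor.
DRIVERS = {
    "firefox": lambda: StubDriver("firefox"),
    "chrome": lambda: StubDriver("chrome"),
}

def make_driver(browser="firefox"):
    """Single place where a run decides which browser it targets."""
    try:
        return DRIVERS[browser]()
    except KeyError:
        raise ValueError("unknown browser: %r" % browser)

driver = make_driver("chrome")
```

Once written, pointing the whole suite at a different browser (or a remote grid) is a one-line change, which is why it is a sunk cost rather than an ongoing one.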

Maintenance of tests

I might be pulling a Telerik here, but having tests that look like the following is a "lolwat?" moment. What even is SENDSELECTEDDiv?

Being able to code as an automation engineer is crucial. Being able to write good tests is useful too! I am biased, but Mozilla has some really good examples of maintainable tests. Tests are trivial to write and update because they invested in a good pattern for tests. Record-and-playback tools have never been able to produce maintainable tests with meaningful API names. Also, calling yourself an automation engineer while only using record-and-playback tools hampers your career (as in, no one will take you seriously).
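
The "good pattern" in question is typically the page-object style. A minimal sketch, with a recording stub standing in for a real WebDriver instance (all class and method names here are hypothetical):

```python
# Stub that records actions instead of driving a real browser.
class RecordingDriver:
    def __init__(self):
        self.actions = []
    def type_into(self, selector, text):
        self.actions.append(("type", selector, text))
    def click(self, selector):
        self.actions.append(("click", selector))

class HomePage:
    def __init__(self, driver):
        self.driver = driver

class LoginPage:
    # Selectors live in one place, so a UI change means one edit.
    USERNAME, PASSWORD, SUBMIT = "#username", "#password", "#login"

    def __init__(self, driver):
        self.driver = driver

    def login_as(self, username, password):
        # Tests read as intent ("log in as alice"), not as raw clicks.
        self.driver.type_into(self.USERNAME, username)
        self.driver.type_into(self.PASSWORD, password)
        self.driver.click(self.SUBMIT)
        return HomePage(self.driver)

driver = RecordingDriver()
home = LoginPage(driver).login_as("alice", "s3cret")
```

Compare that to a recorded script full of SENDSELECTEDDiv-style names: here the API names carry meaning, and maintenance is confined to the page objects.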

Now, beware of the snake oil that is being offered by vendors and, for all that is holy, if you want to do automation don't do record and replay. I, and my peers, will not even let you past a phone screen if you don't show enough knowledge about coding and automation.

Also, if Telerik were thinking straight they would wrap WebDriver, and then they would get everything that is happening in the WebDriver world. Knowing your tests will always work in the browser, no matter what platform (including mobile), is a huge selling point. And it's standards-based, which feels like a no-brainer, but I am obviously biased.

WebDriver Face To Face - February 2014

Posted By The Automated Tester

This week saw the latest WebDriver F2F to work on the specification. We held the meeting at the Mozilla San Francisco office.

The agenda for the meeting was placed, as usual, on the W3C Wiki. We had quite a lot to discuss and, as always, it was a very productive meeting.

The meeting notes are available for Tuesday and Wednesday. The most notable items are;

  • Changing switchToFrame to only accept a WebElement or Index
  • Adding switchToParentFrame to the API
  • Potential changes to the way we do clicks on elements larger
  • Numerous bugs in the spec
  • Removing findElement(By.className and By.Id)
  • Landed the User Interactions spec and created an endpoint for batched actions
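
A sketch of the frame-switching semantics from the first two items, with a stub standing in for a real session (`FrameStub` is a hypothetical illustration, not spec text):

```python
class FrameStub:
    """Stands in for a real WebDriver session's frame context."""
    def __init__(self):
        self.stack = []  # path of frames below the top-level browsing context

    def switch_to_frame(self, frame):
        # Per the change above, only a WebElement or an index is accepted.
        self.stack.append(frame)

    def switch_to_parent_frame(self):
        # The new switchToParentFrame command: go up one level;
        # a no-op when already at the top-level browsing context.
        if self.stack:
            self.stack.pop()

session = FrameStub()
session.switch_to_frame(0)        # into the first child frame
session.switch_to_frame(2)        # into its third child
session.switch_to_parent_frame()  # back up one level
```

Before switchToParentFrame, getting "one level up" meant switching back to the top and re-descending, which is what made the addition worthwhile.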

The other amazing thing that happened was that BlackBerry joined the working group, especially after their announcement saying they have created an implementation.

And... how can I forget about this...

The specification is getting a lot of attention from the people that we need and want which makes me really excited!

ViziCities release roundup

Posted By Rawkes

Last Saturday evening, ViziCities was finally released to the public as an open-source project. Ever since then things have been absolutely crazy and incredibly hard to keep up with.

Have an idea for ViziCities, or just want to get in touch about it? Email us at and we'll get back to you as soon as we can.


Let's start with a roundup of the raw statistics from the past week. They may not mean much to you but I think they're interesting and it can't hurt to share them and use them to track progress in the future.


  • Launch email sent to 4,528 people
    • 58.6% open rate (industry average is 19.6%)
    • 5,647 total opens (includes multiple opens per person)
    • 32.5% click rate (industry average is 2.6%)
    • 2,521 total clicks (includes multiple clicks per person)
    • 38 bounces
    • 43 people unsubscribed
  • Subscriber count at beginning of week: 4,528
  • Subscriber count at end of week: 6,422

Google Analytics

  • 8,985 unique visits to the ViziCities demo over the week
    • 65% using Chrome
    • 17% using Firefox
    • 9% using Safari
    • 4% using IE
  • 10,042 unique visits to over the week
    • 59% using Chrome
    • 15% using Safari
    • 14% using Firefox
    • 7% using IE


GitHub

  • 874 stars
  • 96 watching
  • 74 forks


The Next Web

On Tuesday evening, The Next Web posted an article about ViziCities on their Insider channel.

ViziCities is one to watch for sure.

Of all the coverage this week, The Next Web was the only one to trigger a huge influx of tweets related to the project.

Daily Mail

Wednesday morning saw the Daily Mail run a piece about ViziCities on their website. For those who don't know, the Daily Mail website currently has a daily unique readership of around 11.8 million people. In other words, anything put there gets read by a shit-load of people.

A pair of developers from London used open source data to build an interactive 3D map of the tube network, complete with moving trains. The visualisation was built to showcase the ViziCities project. Its creators have made the code behind the project available for anyone to use.

We enjoyed a huge amount of traffic and interest from the Daily Mail audience, helped in part by ViziCities being featured as the headline article on the front page for most of the evening. I've no idea how we managed to get on the front page for so long, though I'm certainly not complaining!

Of all the coverage this week, the Daily Mail has been the most surreal and the one that has provided the most follow-ups. It's also been the only coverage this week that presented ViziCities in a way that the general public will be able to understand and take interest in.

The flip-side to being featured by the Daily Mail is that you have to endure a particular section of their readership, who provided an eloquent commentary on the project (displayed unedited for your pleasure); such as:

well that's 2 minutesof my life I'll never get back :(

And this thought-provoking statement…

My hardworked taxes towards yet another frivolous vanity project. How is thins gooing to benefit ME?!!? The only help I need I can get from the big brain between my ears.

I'm afraid we can't divulge where we spent all your hard-earned taxes, sorry. I wish we knew!

This was another…

WOW this is amazing, in actual 3D! trains are soooo interesting!! i could just watch them all day..... NOT

It really did seem like everyone loved ViziCities…

What a silly and pointless thing to create

It's a real eye-opener of a project…

Wow i just fell asleep there watching it. It's not very exciting is it?

And my personal favourite…

For a moment I thought wow that's interesting, but its passed now.

We aim to please. I'm just glad we were able to achieve that!

Seriously though, we're incredibly pleased to have been given the opportunity to be featured on the Daily Mail. 99.9% of the response has been positive and we've been absolutely astounded by it.

One of the weirdest moments from the Daily Mail coverage was when the daughter of a long-time family friend and old neighbour got in touch with me on Facebook for the first time ever, letting me know that her mum had seen the article and had phoned her about it. That alone proved to me the value of this coverage, that friends and members of the public who don't follow technology were finding it and actually reading it — all because it was on the Daily Mail. Insane!


ITV News

On Thursday, ITV News ran a piece on ViziCities on their website.

It's a new and exciting way of looking at the London Underground. For the past year Peter Smart and Robin Hawkes have been working on a 3D map that brings cities to life using the power of open data and the Web.

We were hugely excited about being featured by ITV as they're a huge media organisation within the UK. Interestingly, we received next to no traffic as a result. This was likely because the article didn't have any links back to the project for a while, but even then it didn't seem to do much.

Our experience with the ITV coverage taught us that just because you're on a big website with a large audience doesn't mean anything. In a way it's analogous to the number of Twitter followers not being representative of how many people will click links in tweets you post.


Reddit

For a good few days we were on top of the JavaScript sub-Reddit, and even enjoyed a good response in other sub-Reddits.

HTML5 Weekly #125

A WebGL-powered 3D city and data visualization platform. It’s flexible in its operation but can do things like let you visualize the London Underground train network in real time.

Web Design Weekly #127

ViziCities is a 3D city and data visualisation platform, powered by WebGL. Its purpose is to change the way you look at cities and the data contained within them. It is the brainchild of Robin Hawkes and Peter Smart.

Codrops Collective #104

ViziCities is a 3D city and data visualization platform powered by WebGL. A fantastic project by Robin Hawkes and Peter Smart.


We've been overwhelmed by the number of people who are excited about ViziCities and want to see it succeed. It's actually a little unbelievable.

Since launch there have been 3 merged pull requests from the community and a significant fork looking at adding physics to ViziCities. Seeing people actually want to help out is incredibly weird.

Another thing that's taken us by surprise is the number of people sending in screenshots of ViziCities showing their local area.

View of Toronto within ViziCities

View of Amsterdam within ViziCities

View of NYC within ViziCities

View of Portland within ViziCities

Also, the number of organisations and data providers reaching out to us about visualising their data in the project has been amazing. We already have the future of some key features confirmed thanks to people like Plane Finder and Network Rail. We've so much more to catch up on!

What's next?

I think it's safe to say that the first week of ViziCities going open-source has been amazing and overwhelming, but we're not going to stop there. Now that the project is finally out in the open we're able to focus on refining things and getting them working as best as we can, ideally with help from the community.

Even in the past 7 days I've been able to hack together experimental support for 3D terrain and live, 3D air traffic.

Watch this space…

Have an idea for ViziCities, or just want to get in touch about it? Email us at and we'll get back to you as soon as we can.

ViziCities released as open-source

Posted By Rawkes

That's right, ViziCities has been released as an open-source project on GitHub.

It's been a long time since ViziCities first started, about a year in fact! It's been an exciting journey, one which Peter and I are starting to see the results of. We're proud to finally be able to say that it's available for you to download and fork on GitHub. Enjoy!

We've also put together a pre-built demo for you to play with.

View of the Thames within ViziCities

As things stand, the current release includes:

  • Buildings, water (rivers, canals, etc), and green areas (parks, grass, forest, etc)
  • Dynamic data loading using the OpenStreetMap Overpass API (literally the entire world)
  • Accurate heights based on OpenStreetMap tags, if available
  • Caching of loaded data to prevent duplicated requests
  • Processing of geographic features into 3D objects using Web Workers
  • Controls (zoom, pan and orbit)

What you see on GitHub is at a very early stage and things may break, so it's definitely not ready for production use. However, download it, build it and have a play around - you can move around anywhere in the world (it's pretty cool). We'd love to hear what you think. We'd love it even more if you helped us build it!

Here are some interesting facts from the past year:

  • We were the first in the world to build a live visualisation of the London Underground in 3D, as well as the London bus network, which we demoed to Transport for London
  • We've given talks about the project at lots of events
  • We got to meet up with large and important organisations around the world to talk about how ViziCities can help people understand cities and the data that lies within them
  • The codebase has been completely re-written as a modular web application
  • And most importantly, we've had over 4,500 lovely people sign up to hear more about the project!

We've been constantly blown away by the response the project has received. We never imagined quite how much it would inspire people, and we're just happy to finally be able to show you something.

This is just the beginning. Now the project is public it will continue to be actively developed, and improvements will appear regularly. We've still got huge plans for ViziCities, plans which we've barely begun to set in motion.

If you have any questions, reach us at and we'll get back to you.

I'll leave you with a couple of screenshots that others have taken on their journey within ViziCities. I'd love to see more!

View of Toronto within ViziCities

View of Amsterdam within ViziCities

Updated BrowserMob Proxy Python Documentation

Posted By The Automated Tester

After much neglect, I have finally put some effort into writing documentation for the BrowserMob Proxy Python bindings; however, I am sure it can be a lot better!

To do this I am going to need your help! If there are specific pieces of information, or examples, that you would like in the documentation, please either raise an issue or, even better, create a pull request with what you would like to see.

2013: All change

Posted By Rawkes

This time last year I rounded up my 2012. I wrote that it was the craziest year of my life so far, and one that I won't forget. I obviously had no idea what 2013 had in store for me!

All change

2013 was the year for massive change in my life, in every aspect. The funny thing is that none of the changes were planned or even desired, yet I'm glad each and every one happened as they taught me valuable lessons that I'll remember forever.

Suffering burnout and leaving Mozilla

Back in December 2012 I announced my intentions to leave my role as a Technical Evangelist at Mozilla, arguably my dream job. I wrote about the reasoning behind this in a lot of detail. In short, I suffered a combination of burnout and a catastrophic demotivation in what Mozilla was doing and what my role was there.

As of the 25th of January 2013 I was no longer a Mozilla employee, something I never imagined myself saying.

Lessons learnt: Even the most perfect things in life have downsides and if you ignore them you'll pay for it in the long run. Focus on keeping yourself happy, otherwise you'll cause yourself long-term damage.

Ending a 4-year relationship

After 4 years together, Lizzy and I decided it was time to call it a day. It's quite surreal uttering a few disembodied words and subsequently ending a relationship with someone you've spent practically every day with for what feels like forever.

Yes, of course it sucked. But no, I don't regret it and it was definitely the right decision for the both of us. What I was most surprised about was how we both handled it; from sitting down, talking for a while, and deciding that we should split up, to sorting everything out afterward so neither of us got left in a difficult situation.

2013 has shown us both that we're better off because of it. That makes me happy.

Lessons learnt: Good things sometimes come to an end, and that isn't necessarily a bad thing. Remember the good times and make sure you learn something from the experience. Things do get better in time, you just need to make sure you allow them to.

Taking time off to recover and find my way

Part of my decision to leave Mozilla was to take some time off to rest and work out what I wanted from life. What was planned to be 6 months out ended up turning into near-enough 10 months without firm commitment or income. I'm sure glad I saved while I was at Mozilla!

1 year on and, although much better, I'm still suffering from the burnout that I experienced while at Mozilla. Someday it'll fade into the background, though I've resigned myself to the fact that it won't be any time soon, nor an obvious transition.

Lessons learnt: Take time off when you get the chance, it's a great healer. Don't force yourself to do something, it's nearly more valuable to do nothing and let your body tell you what it needs.

Appreciating close friends

One of the general lessons I learnt from 2012 was that I had neglected my close friends. Part of 2013 was about fixing that, at least in a small way.

Over the year I spent much more time with my best friend of 10+ years, helping her through a tough and similar career problem, and generally looking out for each other. Although I don't need to see or talk to her often (the sign of a great friend), doing so has shown me how valuable my close friends are and how important they are to my happiness.

March was pretty special as I travelled to Wales to visit Matt and Sophie, two friends that I've known online for a long time but had never met before. I had a great time exploring their part of the world and being taught how to ride on Anubis, one of their (three!) horses.

Lessons learnt: Don't neglect your close friends. It sucks, and you'll look and feel like a dick because of it. You may not make things perfect but at least try your best to make an effort.

Creating ViziCities

Soon after leaving Mozilla, Peter Smart and I decided to work on a seemingly innocuous project called ViziCities. Oh how naive we were!


Nearly a year to the day since we first started ViziCities, we've come a hell of a long way and have learnt a huge amount about the power of a good idea, especially when implemented well. So much happened but I'll try my best to round up the key events:

  • The hugely positive response to our announcement of ViziCities took us both by surprise
  • We spoke to a large number of influential organisations that we never would have imagined talking to previously
  • We decided to try and turn ViziCities into a business
  • We took part in Mozilla's WebFWD accelerator program, including travelling to San Francisco!
  • We explored avenues of funding and learnt a huge amount along the way
  • We got to talk about ViziCities at large and influential events

As it stands, ViziCities is still being worked on, just at a slower pace due to Peter and I both recovering from big changes and events in our lives. It's too good an idea to let slide.

Lessons learnt: When you have a good idea, run with it. Don't ruin a good thing by taking it too seriously too quickly. Release early and worry about making it perfect later. Working on something purely for fun and the good of others is an incredible motivator, don't squander it.

Doing things as me-me, not Mozilla-me

Something I didn't really consider when leaving Mozilla was that I would need to do everything under my own steam from that point. No Mozilla to hide behind. No Mozilla to help me get events to speak at. No Mozilla to help with travel. Just little old me.

Fortunately I haven't found the desire to do much speaking of late, though I've had some great opportunities this year. Taking part in the real-time Web panel at EDGE in NYC is a particular favourite, as is speaking at FOWA in London toward the end of the year. Going on the stage as Robin Hawkes (of no affiliation) was quite a liberating and scary experience.

Most recently, I got to attend Mozilla Festival in London. This was lovely because I got to catch up with some of my old Mozilla friends and show them ViziCities. I don't think I've ever said so much to so many people in such a short period of time.

Lessons learnt: Doing things on your own is scary but ultimately liberating and rewarding. Don't shun opportunities to do things as you. Stay grounded — it's what you're talking about that's the important thing, not you.

Moving back to London

After spending what feels like most of my adult life living in Bournemouth, recent events gave me the perfect opportunity to up sticks and move back to London. I never thought I'd move back to London, or any city, but after travelling the world I can safely say that London is by far my favourite city in the world. There's something special about it.

I originally moved back with family, which was meant to be temporary but ended up lasting for about 4 months while I sorted my shit out. While slightly embarrassing as a 27-year-old, it was really nice to spend more time with my Dad.

Just before the end of the year I finally got the opportunity to move out and get my independence back. I absolutely hate moving but it's totally been worth it. Having a place that I can call my own is immensely good for my happiness and wellbeing. It also helps that the place I live is beautiful (right alongside the Thames).

Lessons learnt: It's easy to settle and get scared of change. Not having independence can really, really suck. Finding and moving house is incredibly stressful, but worth it in the long run. Make sure you find somewhere that makes you happy, even if it costs a little extra.

Changing my name

In October I decided to do something that I've wanted to do for a very long time; I changed my name back to the one I was born with.

My name in full

For near-enough 16 years I've referred to myself as Rob Hawkes, completely shunning my full name because of bullying earlier in life and general habit since then. Changing back to Robin Hawkes has quite literally felt like a weight has been lifted from my shoulders. I feel like the real me is back again.

The feedback since changing my name has quite honestly been overwhelming; I never knew how much my friends preferred my birth-name!

Lessons learnt: Personal identity is something I should have taken more seriously. It doesn't matter how well you pronounce 'Robin', people in Starbucks will still write 'Robert'.

Joining Pusher

In November I decided that it was time to get serious and get a proper job, in a real office and everything. A few weeks later I started at Pusher as Head of Developer Relations.

It's been a couple of months now and I've absolutely loved my time here so far — everyone at Pusher is amazingly friendly and good at what they do. I'm looking forward to working somewhere and actually making a difference.

Lesson learnt: Sometimes the obvious solution is the best one. Commuting can suck, but it can be solved (by moving house). Being part of something small(er) is an awesome feeling.

Looking after myself and listening to my body

In general, this year has taught me to look after myself and listen to what my body is trying to tell me (it's usually right).

Some of the things I've done this year include:

  • Better exercise, like regular cycling and walking
  • Better eating, as in regular and proper meals
  • Better sleep, earlier and for enough hours
  • Being strict about time off and relaxation
  • Doing things that make me happier
  • Accepting that burnout isn't going away any time soon
  • Taking lots of photos of gorgeous sunsets (great for the soul)

All in all, I feel much better about myself after doing these relatively minor things. Even just the regular exercise has had a huge effect on my energy levels, which I've noticed decreasing since I stopped cycling recently (due to winter and moving house).

Lessons learnt: Routine is easy, until you stop it. You don't have to cook crazy meals to eat well. Getting a good night's sleep is so, so important. Taking time off seriously will pay off in the long run. Burnout sucks.

Reflecting on my 2013 wishes

At the beginning of last year I outlined a few wishes for the following 12 months. Let's see how they did…

Position Rawkes as a viable channel for technology and development content

My wish…

Right now, Rawkes is purely a personal blog containing information about me and the things that I'm currently thinking about. In 2013 I want to explore the idea of turning Rawkes into a much larger content platform, revolving around technology and development. I'd also like to see more authors contributing to Rawkes.

Fail. I completely neglected Rawkes in 2013, mostly due to taking time off and working on ViziCities.

Find a way to earn enough to fund my personal projects and experimentation

My wish…

The next 6 months are going to be spent working out how to fund time to continue experimenting and learning new things. I'm keen to find a way to do this without the act of earning money being my primary focus.

Half-success. I didn't need to fund myself in the end as I sustained myself long enough on savings to last me until I got a proper job. I still got to work on my personal projects.

Work on a project interesting to the general public

My wish…

More often than not, the projects I work on are targeted mainly to the developer community. I'm keen to work on some projects this year that are also of use to the general public, or at least to the wider Internet community.

Massive success. While accidental, ViziCities proved to be incredibly valuable to the general public!

Wishes for 2014

I don't do resolutions so instead here are my overall wishes for the coming year.

Release ViziCities to the public

Even if not the full feature-set that we envisaged, I want to make sure ViziCities can actually be used by people. It's too good an idea to let it sit and gather dust.

Write another book

While perhaps not completely up to me, I think I'm ready to write another book.

Stay happy

I've done enough crazy stuff in the past few years to last me for a while. I'd be happy for 2014 to be spent taking things relatively easy and keeping myself happy instead.

Interesting facts about 2013

As always, I'll end this round-up with a few facts and figures about the previous year with comparisons to last year in grey.

What about you?

I find these retrospectives incredibly useful to me personally, especially when you have a few years-worth to look back on. I'd recommend you try doing them! So how was your 2013? Post it online and send me the link on Twitter.

SuMo Mobile Meeting 20th November 2013

Posted By satdavuk

SuMo Mobile Meeting (held every Wednesday). Estimated time: 25 minutes – 1 hour. Links will be shared in #sumo (IRC).

You need to have Vidyo in order to join. Join link: Please enter guest name and then click join.

To call in: +1 650 903 0800, x92 9213 +1 800 707 2533, pin 369 – conf 9213

Previous minutes (link will be added after each meeting). Recording of this meeting posted on YouTube:

Present at meeting: Michelle Luna, Roland Tanglao, Ralph Daub, Patrick McClard, Rachel McGuigan

Previous Action Items:



Open issues:

Bug 890974 – ZTE Open does not keep Wifi on consistently

People are able to reproduce the issue on the production phone

Bug 928256 – 1.0.1 to 1.1 update issue

Closed issues :

Bug 936793 – [Dialer] Call log migration error from 1.0 to 1.1: “TypeError: number is null”

patch has been made

informed TAMs to let ZTE/Alcatel know about this critical bug

Firefox OS or Android news

v1.1 rollouts

feedback from parents! Ralph & Michelle’s mom have their first smartphones, FFOS! FTW.

keyboard is the biggest trouble for both moms

adding vibration and sound feedback

facebook pictures sharing is great

B2G/ Firefox OS:

scale & differentiate

Feedback for Firefox OS

feedback is doubling! 1,000 pieces of feedback in 11 days (from the monthly statistics). pt-br feedback is increasing; Spanish is tops!

no feedback from Latam about TCL issues

less English feedback, we’ll look into it

Firefox for Android:

FF25 is cool; FF26 English docs done (thanks Michael); FF27 little or no KB article edits, please double check the following etherpad:

Moar partnerships in 2014! To be announced in 2014, no idea when :-)

Anything else:

Escalations active this week:
– [Dialer] Call log migration error from 1.0 to 1.1: “TypeError: number is null”
– [system update] Alcatel onetouch Fire to version 1.1 failed in Poland
– [TCL] [Updates] – updating to v1.1 makes device reboot every few seconds

Still very difficult to find phones in many parts of Brazil, and many stores still don’t have the proper marketing.

Marcia Zikan is working with a team in Brazil to make sure the phones are available and that the stores have the proper banners and advertising.

Action items for next week:

TPAC 2013 - WebDriver Face To Face and more

Posted By The Automated Tester

Last week I was at W3C TPAC for a week of face-to-face meetings to discuss WebDriver and the other specifications that other working groups are working on.

Our initial agenda went up just before the meeting and we were lucky enough to get through all the items. If you would like to read them, the notes for the meeting are available (Monday and Tuesday).

Highlights from the meeting are:

  • We are rescoping the specification to the relevant things we need. We have been suffering from scope creep, so we are removing things that aren't complete or won't be completed soon. We will be creating a WebDriver "Level 2" specification which those features will go into.
  • We have roughly 36 weeks worth of work to get the spec and tests done. - I will be writing details on how anyone can help make this time line realistic so we hit all of our milestones!
  • We are going to be adding an API for when an element is intractable via mouse/touch, we haven't decided on a name yet but will be in the spec soon
  • Simon and I were speaking to other editors/working groups to see if we could offload definitions of things to them. It seemed to be positive, let's see where things head.

There are other actions from the meeting that need to be done but I think the items above cover the main points, at least for me, that came out of the meeting.

I found the rest of the week really useful, both from a networking perspective and from a learning perspective. I have a lot of changes that I need to put into the WebDriver spec, and I have been getting feedback when I am doing it wrong, which is great!

Test The Web Forward - Shenzhen, China

Posted By The Automated Tester

Just over a week ago I was in Shenzhen, China helping with Test The Web Forward. A great initiative to get anyone and everyone to write tests for browser companies to use. The tests are conformance tests for W3C.

In the next couple of weeks I am going to document what is needed to help out and how you, yes you, can help! The WebDriver open source test suite needs to be checked against the WebDriver specification and then ported over, or bugs raised against the spec or implementations. There will be bugs, which is a great thing!

P.S. We had 4 patches with ~10 tests from people who had never used WebDriver, so I consider that a great success. We also noticed a number of pain points we need to fix before we get more people involved.

Joining Pusher as Head of Developer Relations

Posted By Rawkes

I'm happy to announce that I've joined Pusher as Head of Developer Relations, starting immediately (in fact, this is my second day).

Who are Pusher?

I know a lot of you will have heard of Pusher before, but here is the blurb anyway for those who haven't:

Pusher is a hosted service for quickly and easily adding realtime features into web and mobile applications. We're used for all sorts of features such as notifications, game movements, chat, live data feeds and much more. Our aim is to take away the headache of configuring realtime infrastructure so that developers can concentrate on making awesome stuff.

In short, Pusher take the pain out of building applications and websites that want to benefit from real-time data. They handle the infrastructure behind real-time data so you can focus on the important bit, creating your application.

They're a lovely bunch of people and I'm excited to work with them, get to know them, and make awesome things happen with them.

What will you be doing?

In general, I will be in charge of providing oversight and direction for Pusher's developer-facing activities. This includes things such as:

  • Working on the overall plan for developer relations at Pusher
  • Co-ordinating with other departments to improve the developer experience Pusher-wide
  • Making sure the on-boarding process for developers is as smooth as possible
  • Handling developer-facing events
  • Optimising the developer support process where possible
  • And much more…

I'm keen to get started and implement the ideas that I've had as a result of my experiences both at Mozilla and in my day-to-day developer-related activities. I'm confident that developer relations can be improved in so many ways, not just at Pusher but everywhere.

When do you start?

I've already started. I joined on the 7th of November, 2013.

Where will you be based?

At Pusher's offices alongside a lovely canal in Hackney, London. Not far from Shoreditch.

Awesome, but what about ViziCities?

Good question. Pete and I have effectively decided to move ViziCities back to a side project and worry about making it awesome (and releasing it) rather than worrying about making it a business.

In fact, work on ViziCities has progressed significantly in the past fortnight now that the financial strain has been alleviated. On the development side, I'm in the process of moving from distributed, ad-hoc experiments to a more deliberate and structured application architecture. The results of early progress in this area have already shown that I can integrate new functionality much quicker than before. Two examples of this include:

  • Building a loading system that uses promises to notify in code when all functionality has finished loading.
  • Using Web Workers to offload the creation of 3D models to a separate thread, meaning that you don't lock up the browser UI and the user experience stays relatively smooth. This alone has resulted in a massive improvement to the loading and rendering of many thousands of complex buildings, improved even further by the use of typed arrays and transferable objects.
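The loading pattern described above translates directly between languages. As an illustration only (ViziCities itself is JavaScript; none of this is project code), here is the same promise-style idea sketched in Python, where `asyncio.gather` settles once every component has finished loading:

```python
# Illustrative only: stand-ins for loading terrain/building/transport data.
import asyncio

async def load(name, delay):
    await asyncio.sleep(delay)  # pretend this is a network fetch
    return f"{name} ready"

async def load_all():
    # gather() plays the role of Promise.all: it resolves only when
    # every loader has finished, preserving the order of its arguments.
    return await asyncio.gather(
        load("terrain", 0.01),
        load("buildings", 0.02),
        load("transport", 0.01),
    )

print(asyncio.run(load_all()))  # ['terrain ready', 'buildings ready', 'transport ready']
```

The win is the same in either language: individual loaders run concurrently, and the caller gets one notification when everything is ready.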

Plus much more, like finishing a proof-of-concept for visualising London buses in real-time, along the real road network, in 3D.

Expect more ViziCities updates in the future as things progress further.

More soon…

I'll leave things there for now, though I'll write another update once I've settled into Pusher and have a better idea about what's going on.

In the meantime, check Pusher out and get in touch should you have any questions about my joining, Pusher in general, or the services they offer.

Here's to the next chapter!

Sumo Mobile Meeting 6th November 2013

Posted By satdavuk

Hello Mozillians


We have a SUMO meeting for Mobile and B2G; the meeting is at 10:30am PT.


If you would like to take part, please click on the link


or you can go to the etherpad directly

Thoughts and advice for public speaking

Posted By Rawkes

Over the past few years I've had the opportunity to speak at a number of events all around the world, taking me from a terrified newbie to a (slightly-less terrified) professional. Here are some (unordered and rough) thoughts and tips on public speaking based on my experiences.

I will keep this entry updated as I learn new things.

Taking part in a panel at EDGE NYC

  • No one is perfect, don't make that your goal
  • Focus on being happy with yourself and your own approach and style
  • Try not to compare yourself to other speakers, everyone does things differently
  • Other speakers will always look better to you than your perception of yourself (more composed, articulate, funnier, engaging, etc) – that doesn't mean you aren't any of those things
  • Public speaking is scary shit, even the professionals get nervous
  • Ask for feedback but don't take it personally or immediately apply it (some people love to be negative without being constructive, also everyone has a different opinion)
  • Try and ignore the "great talk" comments – they're nice, but not constructive and don't help you get better
  • Don't use humour as a crutch, you may get laughs but you'll dilute the real content

Try not to look as scruffy as this

  • Find a comfortable position for a headset mic and try not to fiddle with it for the rest of the talk
  • Run the cable for headset and lapel mics down inside your top, that way you won't accidentally catch them and deafen the audience
  • Don't hold a handheld mic like a rapper, you look stupid and no one can hear you
  • Take your lanyard off before you talk, otherwise you may get it caught in the mic or your hands
  • 99.9% of people are normal and will give you their time and attention and won't be dicks
  • Given the chance, most people wouldn't want to be where you are (but wish they could)
  • You have a huge amount of power as a speaker, people will listen to you and do what you say. Use it to your advantage.
  • Stay calm and speak slowly
  • Don't be afraid to pause for breath, to calm down, or to take a sip of water
  • Related to above, make sure you have some water with you
  • For you a pause may seem like a lifetime (especially with people watching) but in reality it's only a couple of seconds
  • No one notices or cares if you pause or make a mistake
  • Don't freak out or start apologising profusely if you make a mistake, just compose yourself and continue where you left off
  • Try not to infer anything from the facial expressions and behaviour of the audience (weirdly, the most engaged people are often the ones who look most bored)
  • Have a pee just before you talk, even if you don't particularly need one

Title slide for a recent ViziCities talk

  • Keep your slides to a minimum (few words, no bullets, more images and visual triggers)
  • Write notes to expand on the minimal slides that you can keep on your presenter display
  • Don't stress about practicing, so long as you have good content and notes then you'll be fine
  • However, make sure you keep to time as it's bad practice to go over and have to rush or end early (for me I give myself around 30 seconds a slide)
  • Try not to get distracted and look at the big shiny screen behind you (trust that the slides work). Remember that you have power as a speaker and people will look where you look… you want their eyes and attention on you and not unnecessarily on your slides
  • If you're a fiddler or walker-and-talker, try bringing something to hold to prevent you looking uncomfortable
  • Bring a power cable and display adapter to the stage with you
  • Don't rely on the Internet being available or reliable (have offline demos and videos)
  • Don't live-code if you don't need to, it's cumbersome, error-prone (due to pressure of talking) and takes much longer than explaining pre-written code. If it's for a demo, use a video of you doing it (successfully) previously.
  • Backup your slides on a USB and somewhere online (trust me, this will save you one day)
  • Ask for the projector dimensions and general AV setup at the event before you make your slide deck

Demoing ViziCities at Mozilla Festival

  • As much as you can, talk about things that you are interested in and excited about (even if it's a 'boring' topic) – it's impossible to be bored if the speaker is obviously interested in what they're talking about
  • You don't need a full-on story arc to your talk but have some structure with a beginning, middle and end
  • Most importantly, try to enjoy it. Not many people get to do this and it's a great experience. You'll survive and you may even enjoy it slightly
  • Use a decent remote control, ideally not Bluetooth- or WiFi-powered
  • Record your talk and watch or listen to it back afterward; this is the best way to improve as you'll notice things like nervous tics (for me, walking or rocking back and forth – yeah, weird), or repetitive phrases (for me, "erm", "like", "so…", "interesting", etc)
  • Repeat questions back to the audience (so everyone can hear and it's clear you understood it)
  • Don't be afraid to say, "I don't know" or, "I'll get back to you on that"
  • If the answer to a question is going to be long, suggest catching up with them after the talk instead
  • Put your slides online and mention that during your talk so people can focus and not have to take notes

I'm sure that's not everything but it's a good start for now. I'll update this entry as I think of things that I've missed.

In the meantime, I'd love to hear about any tips you might have learnt while speaking in public, or perhaps some questions you've got about speaking that I might be able to answer. Grab me on Twitter and I'll do my best to help!

Robin Andrew Hawkes: The story behind a name

Posted By Rawkes

A long time ago I used to be known as Robin Andrew Hawkes. It was my full birth-name. A name that I was proud of. A name that had both history and story behind it. However, when I was a young teenager I decided to call myself Rob, just Rob. I made every effort to stop using my full first name, and made even more effort to make sure my full first and last names didn't appear together. I've always regretted this. I've always felt sad about it. I've always felt like a part of me has been missing. So why did I do this? And why haven't I changed back?

A name of 2 birds

I was actually known as Andrew Hawkes for a few days after being born. It was a name given to me in part because my dad had a friend called Andrew who had recently died in an accident. It made sense to remember him that way, and it was a nice name. It didn't last long though.

One day while I was still in hospital, a surreal event happened that had a profound effect on the rest of my life. A robin, the bird, flew through the window of the ward I was in and landed delicately on the curtain rail above my cot. Aside from the nurses freaking out because a bird was in the ward, no one could quite believe it. But there it sat, a little robin looking down at little Andrew. The robin was shooed out by the nurses and from that moment on I was known as Robin Andrew Hawkes. My first name the result of a surreal moment, my middle name a tribute to a family friend, my last name containing the history of my ancestors. A name full of story.

A robin, by Joffley on Flickr

Ever since then I've had a connection to robins that I can't quite explain. I'm not a believer in a greater being, but I certainly believe that there are things that cannot be explained yet. In this case, perhaps because I'm looking out for it, I've always noticed robins. They seem to always surround me, whether living in the garden or showing up in random moments. Not just that, but when they do show up we often share a moment, some sort of connection – maybe the way the robin behaves, or just the timing of their arrival. I don't need to know the reason why that is, all I care about is that it feels special and I feel like I have something watching over me. Silly, perhaps. Especially for someone who doesn't believe in God.

Fucking batman

Up until late primary school (elementary for those outside the UK) I was Robin, the smiley happy kid who was always getting in trouble. Not for being badly behaved, more because I was mischievous and always had a grin on my face (which made denying anything incredibly hard).

A very young me

Toward the end of primary school we all started to work out how to wind each other up, mostly through teasing. Everyone did it, it's just a part of being a kid. The problem was, I was more ripe than most for the name teasing because not only was Robin Batman's sidekick (second best) but at least in the UK we have the term Robin red-breast, named after the bird. Coupled with the fact that my last name (Hawkes) sounds like a bird too, the teasing was guaranteed. Then Robyn happened (the singer), and everyone assumed I had a girl's name. Oh the joy.

Actually, I didn't mind the teasing so much. What I did mind was when some of my 'friends' realised that it could be used to hurt me, which happened to be the same time the bullying started. I was never physically bullied but I was always bullied mentally in a variety of ways, one being via my name. At that point I was so embarrassed and annoyed at my name that I decided to drop the birds and call myself Rob. Plain and simple. You can't bully a Rob.

It worked, but in doing so I lost part of my identity. Part of what made me… me. I never quite got over that, though the resulting years certainly helped make it easier to forget about.

It hurts to call me Robert

Fast-forward 15 years or so and I've made a professional career for myself under the name Rob Hawkes. I've written a book under the name Rob Hawkes. Hell, the vast majority of people in my life know me only as Rob Hawkes. I got so used to it that I very nearly forgot about my full name and how it made me feel.

My university degree

It was only quite recently, within the past few years that I've started to reminisce about my name and consider changing it back. It probably started when I graduated from university and saw my name in full, looking formal and smart. I was an adult and by that point the painful memories of the name had worn away and I was left wondering why I wasn't called that any more. I didn't have an answer, at least not one any better than "Because everyone knows me as Rob, it's too late to change."

I would guess that most people wouldn't predict that my full first name is Robin. In fact, I'd bet that you would assume my full name is Robert. Everyone knows a Robert. Very few people have even heard the name Robin, let alone met someone called that. And guess what, it hurts to have people call you by a name that isn't your name. It's not me!

Back to my roots

I've always regretted calling myself Rob and hiding who I really am. Hiding the real me and the stories behind who I am. It's always felt odd to call myself Rob, perhaps even wrong, but I was used to it.

From now on I'm Robin Hawkes, a name that I'm proud of. To the outside, it's a small change. To me, it's one of the biggest decisions I've made. It defines who I am. Losing the Rob feels like losing a part of me, but gaining the Robin feels like I'm getting a part back that I've missed dearly.

I've already had friends telling me how happy they are that I've changed my name, how they always preferred the name Robin. It's really nice to get support for what feels like a massive decision, even if it seems small and silly from the outside.

Now to go about changing my name on what feels like every aspect of my adult life, both online and off. Starting with Twitter and Facebook, because you know, that obviously makes it official.

What about you?

I'm sure I'm not the only one to have a story behind my name, or a reason for changing to a shortening or nickname. Why did you change? Have you ever thought about changing it back?

HackBMTH last weekend

Posted By The Automated Tester

Last weekend there was another HackBMTH, this time hosted by Adido. This was my first time going to a hackathon in the Bournemouth area and it was great. Jonathan Ginn and Adam Howard are the organisers and, to be honest, they did a brilliant job of organising the event.

There were people from all over Bournemouth, and from as far away as Reading, who came to work on their little projects or create new hacks. There were ops people helping others set up Apache and a few other things. There were people creating games, one of which was a tank game that took a picture from the webcam and turned it into the level. Finally, a use for QR codes!

I worked on my GitHub+Travis Firefox Add-on. I worked to start pulling in the build history of Travis CI Projects to the project page on Github. Below is a screenshot of what it looks like at the moment. It's still rough around the edges.

You can install a pre-release from Github if you want to play with it. Feel free to raise bugs about it on the Github page.

The only thing that I was disappointed with was that there was only one woman there. This, I should point out, I don't think is the fault of the organisers. When I worked in Southampton there weren't many women applying for positions, so I know the area suffers from a lack of them. I know that there are some as students, having met them at B & W Meet, so I hope that more will come. After all, one of the main reasons there are not enough women in tech is because there are not enough women in tech, to paraphrase Sheryl Sandberg.

QA Site Development

Posted By satdavuk

I have been doing a lot for QA:

  • added the Github link to the code
  • changed the footer date
  • added some events to it

W3C WebDriver Face To Face - June 2013

Posted By The Automated Tester

A couple of weeks ago was the latest face to face of the W3C Browser Automation (WebDriver) Working Group. We as a group don't meet anywhere near often enough, in my opinion, but when we do, the conversations we have are quite amazing.

Below is the Agenda that we used for discussions. If you want to read the minutes you can read them here and here.

If you have any questions please leave a comment!

The tale of Selenium bug 141

Posted By The Automated Tester

The Selenium project has for a while now wanted to create a library that allows user emulation in the browser. The project has done a reasonable job at this so far. We check if items are in the DOM, we check the visibility of items and we normalise text from the browser, amongst our other amazing talents!

Our default position is that we enforce the idea of emulation. "How would a user do this action?" is the main question we ask ourselves! So when a bug doesn't meet that test, we tend not to implement it. One of these is bug 141 - WebDriver lacks HTTP response header and status code methods. This is one of the most contentious bugs in the Selenium repository. People see Selenium as more than a browser automation library; they see it as a web testing framework. Yes, Selenium RC was like this, but then again Selenium RC also had ~140 methods hung off one object. We made a mistake in the past; let's try not to repeat it! Anyway, I digress: in a web-testing-framework world there may be use cases where you need to find out data from the response headers and status codes.

So is this a feature we want to add to WebDriver? No. "But David, we do!?!?!?" I hear you cry! The use case that comes up regularly is: I want to know if a server is returning a 404 or 500 when I access a page. Why is there a need to start a browser to find this out? When I asked this question on Twitter, my friend and fellow committer Kevin Menard said that it could be useful to see the page response on failure, and that some pages can only be accessed with a real browser. The use cases Kevin had are that the endpoint you originally hit might change based on the user agent passed in, or that a site might follow the single-page idea, breaking pushState and therefore most of the web semantics. I admit Kevin is right with those use cases. However, that's only looking at a small part of the issue, and there are a lot of corner cases. The major one for me is watching XHRs. What if one of your XHRs returns a 500 and you get an error? Should WebDriver show you that information as well? I am sure you can see how quickly this can escalate. If you start worrying about all requests, and you should if you are worried about the initial request to a page, then you want the browser to capture all of this information and return it in a standard format. There just happens to be such a format, called HTTP Archive (HAR).
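To make the HAR point concrete: once everything, the initial request and every XHR alike, is captured in a HAR blob, the original status-code question becomes a simple filter. A minimal sketch (not Selenium code) over the HAR format's `log.entries[].response.status` layout:

```python
# Minimal sketch: filter a HAR blob for failing requests (status >= threshold).
def failed_requests(har, threshold=400):
    return [
        (entry["request"]["url"], entry["response"]["status"])
        for entry in har["log"]["entries"]
        if entry["response"]["status"] >= threshold
    ]

# A tiny hand-written HAR fragment for illustration.
har = {"log": {"entries": [
    {"request": {"url": "http://example.com/"}, "response": {"status": 200}},
    {"request": {"url": "http://example.com/api"}, "response": {"status": 500}},
]}}
print(failed_requests(har))  # [('http://example.com/api', 500)]
```

This is exactly why capturing the full picture in a standard format beats bolting one status-code method onto WebDriver.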

So should the project take on the task of creating a mechanism for extracting a HAR from the browser? That could be near impossible on browsers that don't readily share network usage information. So we go from WebDriver being a browser automation framework for user emulation to a browser automation framework for user emulation and networking information gathering. That might be a little obtuse, but it's meant to be.

While Selenium's main use case is test automation, it is not solely designed for that. We want to be the best at browser automation. Proxies are really good at managing network information and reporting on it in standard formats; examples are Fiddler and browsermob-proxy. And you can even hook them into Selenium WebDriver, like the following:

from browsermobproxy import Server
server = Server("path/to/browsermob-proxy")
server.start()
proxy = server.create_proxy()

from selenium import webdriver
profile = webdriver.FirefoxProfile()
profile.set_proxy(proxy.selenium_proxy())
driver = webdriver.Firefox(firefox_profile=profile)

proxy.new_har("google")
driver.get("")
proxy.har  # returns a HAR JSON blob

server.stop()
driver.quit()

Yes, it might be a little verbose to use them, but if you really want to see that type of information then you should do it properly, otherwise the real errors that you originally wanted will go missing.

Putting half a solution into Selenium is just going to create scope creep and a large number of bugs because it's not doing the "right" thing. The "right" thing is far too subjective for it to be done correctly.

ViziCities dev diary #2: London Underground in 3D, Leap Motion, funding and more!

Posted By Rawkes

For the past few months Pete Smart and I have been working on ViziCities, a 3D city and data visualisation platform — the project is about bringing cities to life using the power of open data and the Web (think Sim City but for real life). You can find out more about why we're doing this and how in the first dev diary.

In the first month we had created the beginnings of a powerful platform and had a lot to show for it. Let's take a look at what's been going on since then; we're confident that it's been worth the wait!

Aside: We're currently looking for funding options to secure the future development of ViziCities. Interested? Get in touch via!

Realising the potential

When we released the first dev diary back in March we didn't expect much of a response, after all this was just a personal project created by two people for fun. If only we knew back then what we know now — it went down a storm!

ViziCities plinth

What has become clear to us is that this project is much more than a personal experiment. We're beginning to realise the huge potential of ViziCities, specifically around the power of visualising large quantities of data on a 3D, living city.

ViziCities SSAO demo

It's also clear to us that other people are extremely excited about the project and want to get involved and help us succeed. We totally didn't expect this and have been humbled by the response so far.

We've obviously hit on something important and we're keen to make sure it happens!

Exciting discussions with third parties

Since the last update we've been talking with a number of third parties who we'd like to work with to help make ViziCities better. These third parties range from important individuals, to Fortune 500 companies, to open source initiatives, all the way to large government organisations.

We've been blown away by the response when we show these guys our progress. It's also shown us that there is clearly a huge amount of interest in the project and its future potential as a data visualisation platform. In short, we're confident we're onto a winner.

I won't go into too much detail just yet, but here are a few examples of the kind of people we've been meeting with in the past few weeks:

Ordnance Survey

The national mapping agency for Great Britain. If you live in the UK then you've likely used an Ordnance Survey map or service.

Ordnance Survey HQ

Transport for London (TfL)

The guys who run the entire public transport network in London.

View from the TfL offices

The curators of the UK's primary open data store.

And many more!

We've been talking to a lot of people since the last update, these have been just a few of them. We're keen to talk to more people so get in touch via and we'd love to work something out.

Talking at events about our progress and experience

One of the most exciting non-development things that happened recently was the opportunity to talk about ViziCities at the Front-Trends conference in Poland. It was great fun and the community was superb!

Pete (right) and I (left) at Front-Trends

During the talk we covered a little about the history of the project, as well as some of the development issues we encountered, and perhaps even a little secret demo at the end. You'll have to watch the video (below) and find out for yourself…

We've got a bunch more talks in the pipeline and can't wait to show more people what we've been up to. The project is changing week on week and so we always have something new and exciting to talk about.

Please get in touch via if you think we'd fit in with your event, it's likely we'd love to come talk.

Awesome new features

We've been alluding to some very exciting features for a while now, and we're finally ready to show you some of them!

Secret ViziCities feature

Leap Motion

Not long after the last update, we realised that there was huge potential to utilise the Leap Motion as a new way of interacting with the 3D visualisation (think Minority Report). What we didn't expect was Leap Motion reaching out to us and sending a dev kit to experiment with!

Pete and I demoing the Leap Motion support at Front-Trends

It's early days and we haven't put too much time into integration yet, but check out this video for a cool demo of what we managed to cobble together in half an hour:

All in all, we're massively impressed with the Leap Motion (it's so accurate!) and reckon we can make some really intuitive interactions for people using the device.

Live data

Live data is something Pete and I are keen to explore in depth for ViziCities. One of the later experiments has been to visualise live tweets within the 3D city environment, which ended up looking really cool!

Aside: Twitter visualisations are something close to Pete and I, we love them.

We haven't finished this yet but we're aiming to make the tweets float out of the city similar to the way a balloon floats out of a child's hand and into the sky. Here's a little video of our current progress (the tweets are real):

The eagle-eyed amongst you will notice that there aren't many tweets appearing, considering how much London uses Twitter. There are two reasons for this: the first is that we recorded this video early on a Friday morning; the second is that we only have access to Twitter's 1% real-time stream, so in reality there are many, many more tweets.
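For the curious, scaling up from the 1% sample is simple arithmetic; a throwaway sketch (the numbers here are invented for illustration):

```python
# If the 1% sample shows `sampled` tweets in some window, the full volume is
# roughly sampled / sample_rate. Purely a back-of-the-envelope estimate.
def estimate_total(sampled, sample_rate=0.01):
    return sampled / sample_rate

print(estimate_total(40))  # roughly 4000 tweets in the same window
```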

And now for the feature we've been the most excited about…

London Underground in 3D and real-time

Ever since we started the project we've wanted to visualise public transport in real-time and 3D. Why? Partly because it'll look amazing, but mostly because this is exactly the kind of thing that will help bring cities to life.

We decided that our first challenge was to build the London Underground network in 3D, with the aim to then place real-time trains along the 3D tracks. This was something we'd not seen before — anywhere — and definitely not something seen on the Web. Crazy? We thought so too, but it was worth a try.

Fortunately, we had a head start thanks to Matthew Somerville's live (2D) tube map and the TfL APIs. However, the 3D aspect introduced a whole world of complexity to the challenge (e.g. how deep are the stations?), but we were prepared to take it on!
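At its simplest, the placement problem reduces to interpolating along the track between two stations, depth included. A simplified sketch with invented coordinates (real tracks are polylines, not straight lines, and the figures below are not real station data):

```python
# Interpolate a train's 3D position between two stations, given the journey
# progress (0.0 = just departed, 1.0 = arrived). Coordinates are (x, y, depth).
def train_position(depart, arrive, progress):
    return tuple(a + (b - a) * progress for a, b in zip(depart, arrive))

station_a = (0.0, 0.0, -40.0)     # invented coordinates and depth in metres
station_b = (100.0, 50.0, -20.0)
print(train_position(station_a, station_b, 0.5))  # (50.0, 25.0, -30.0)
```

The real challenge is sourcing the inputs: per-station depths and live progress estimates, which is where the spreadsheet wrestling came in.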

We succeeded

We'll save the breakdown of how we implemented it for another post (we learnt a lot) but it's safe to say that we got it working and it looks beautiful. Oh, and you see those icons above the tube lines? Those are trains. Real trains. Moving in real-time. In 3D…

Close-up of the London Underground

What you can see here is the result of many hours of work from Pete and I, ranging from programming time, to nearly giving up trying to understand the algorithms involved, to hours spent manually wrestling with Excel spreadsheets containing the data we needed. Let's just say that the world would be a better place if data was offered in a variety of similar, usable formats.

The London Underground

The result, if we do say so ourselves, is amazing. We can't actually believe that we did it! The cherry on top is that the London Underground in 3D looks just as good as we'd hoped.

An overview of the London Underground network

We still have a way to go but we certainly have the beginnings of a live public transport 3D visualisation platform, at least for London. From here we hope to move on to buses and other aspects of the network in London.

Adding UI interactions for live trains

Static screenshots are so boring, so here's a video showing off our latest and greatest feature. We hope you enjoy it as much as we have!

Want to talk to us about this? Get in touch via and we'll spill as many beans as we can.

And much more!

While the London Underground in 3D is clearly the most exciting feature to date, we've also been working on many others. Such as:

  • Ambient sound via the Web Audio API
  • Better AI (video below)
  • Implementing UI and interaction

Rest assured, there is much more left to do. We've only just touched the surface of what's possible with the platform!

Help fund the future of ViziCities

With that in mind, we need your help. To continue working on ViziCities and ensure its future success we need to start working on it full time and really give it the attention it deserves. To do that we need to live, and to live we need to earn money.

I'll keep this brief: we're now actively looking to explore financing options to fund future development. Particularly in the early stages, we're keen to explore being sponsored by like-minded organisations who also want to see a platform like ViziCities happen and succeed.

Know anyone who would be interested in supporting the future of the project? Get in touch via and we'll get right back!

We've been blown away by your response so far. We're making this happen and we'd love to have you as a part of it!

Finally, make sure you sign up to find out about the public beta. We'd hate for you to miss out.

Based in London and have my dream role? Tell me!

Posted By Rawkes

Just under a month ago I announced my plans to up sticks and move back to London. A lot has happened since then and I've had a lot of time to think about where I am now and where I want to be in the future.

Although slightly later than planned, I'm still moving to London. But there is more. I've also been reconsidering my current situation regarding (my self-imposed lack of) employment. While the motives behind this decision still apply, recent events in my life have caused me to reconsider my options and take a different view on what's important to me right now.

The long and short of it is that I'm keen to explore the options available to me, one of which is traditional employment. I'll outline my desires around employment in a bit more detail below, though you can just go straight ahead and email me ( if you think you have something I'd enjoy!

What am I after?

If the last few years have taught me anything, it's what's important to me in life. Put simply, I've learnt that if something doesn't make me happy then I don't want to be doing it. But what does happy actually mean in this case? Perhaps a better word is worthwhile. If I don't feel like what I'm doing is worthwhile then I quickly lose interest and become unhappy. Life is too important to waste away doing things you don't enjoy.

Something else that I've learnt is that I don't like feeling constricted in what I can do. Without freedom it is too easy to feel restricted and eventually lose interest in what you're doing (because you want to do something else). Whatever I do, it needs to have freedom balanced with the necessary focus to prevent that freedom turning into a different problem (too much choice and a lack of direction). Whether that's freedom built into a role, or freedom gained from part-time employment, it's something that's very important. I don't want to give up my side projects and general tinkering.

So what am I after exactly? A few things. Here are some key words that sum up individual things that make me happy…

  • Developer engagement
  • Data visualisation
  • Prototyping
  • Demonstrations and hacks
  • Speaking
  • Freedom
  • Flexibility
  • Learning
  • New and shiny
  • Interestingness
  • Technology
  • Programming
  • Being around others

Basically, I'm keen to explore options that allow me to continue experimenting and tinkering with various technologies. Extra points for options that give me the opportunity to share that learning with other people. What I'm not interested in doing is the same thing day in, day out — there has to be variety.

I'm a patient man and I'm prepared to wait for the right role to appear; I'm in no rush to simply get a job.

Got the perfect role?

Are you based in London and have a role that fits? Get in touch ( and let's talk!

Moving to London

Posted By Rawkes

Guess what? I'm moving to London!

You heard right; I'm upping sticks from my cosy life by the beach and heading back to the city that I was brought up in. Am I crazy? Maybe.

Tilt-shift from a helicopter, by yours truly

What is this about?

To cut a long story short, my life has changed a whole bunch in the past few years. This year alone has seen some of the biggest decisions of my life to date; like leaving Mozilla and, most recently, (mutually) ending a 4-year relationship.

Whether good or bad (both decisions can be seen in either light), what's certain is that I now have the freedom to grab life by the balls and take it in directions I hadn't considered before.

Why am I doing this?

So why London? Well, first of all it's a city I know and love. I was brought up in Richmond and I lived there for near-enough 3/4 of my life before heading to university. I may not love the insane crowds so much but I certainly appreciate the beauty of the location.

Aside from the sentiment and history, London is a great place to be if you want to immerse yourself in the UK Web community. And as much as I may have despised a move to London in the past (the countryside is beautiful), the time has come for a change and I can't think of anywhere better for that change than London.

When will it happen?

This is still up in the air at the moment but the plan is to sort everything out within the next few weeks.

It'll likely happen in stages, starting with a temporary move to the family home in Richmond and then a more permanent move to my own place somewhere in the vicinity (South West).

How can you help?

I've been out of touch with London for a long time; so much has changed since I last lived there. I also don't know a huge amount of people there any more.

Here are some things you might be able to help out with…

  • Inviting me along to local social and industry events that I might not know about
  • Letting me know about work and contracting opportunities in the city that might tickle my fancy (R&D, experimentation, etc.)
  • Helping me out while I ask stupid questions about the city

Basically, I'll need help kick-starting the next stage in my life. I'll appreciate it!

Feel free to email me directly on or send me a tweet.

Before I go…

Here's an early screenshot from a ViziCities experiment with London landmarks.

ViziCities: London Landmarks

ViziCities development diary #1: One month in

Posted By Rawkes

Update: The second Developer Diary is out

Just over a month ago I announced ViziCities, the latest project from Pete Smart and myself. We're not quite ready to release it yet but make sure you sign up for the beta to be the first to use it. In the meantime, let me fill you in on what we've been up to this past month.

Note: This entry is focussed on the development side of ViziCities. Pete is working on the UI and UX side of things and we will update you on that progress separately.

What is ViziCities?

Although it's entirely obvious to Pete and myself, describing ViziCities has always been slightly difficult. This isn't because it's hard to understand, more that it's a combination of many things and we're still looking for that succinct elevator pitch.

Bringing cities to life

At the most basic level, ViziCities is about bringing cities to life using the power of the Web. At a slightly more wordy level, it's about creating an interactive 3D city visualisation platform that is beautiful, fun and engaging.

Intersection of data, art and play

The best way we've found to describe the project so far is that it sits at the intersection of data, art and play. What's great about describing it this way is that it means we get to create a Venn diagram, and who doesn't enjoy a good Venn diagram?

ViziCities: Data, Art & Play

Inspired by SimCity

The original concept for ViziCities was to use WebGL to replicate the data layers from the new SimCity game. There's something about visualising huge quantities of data about a city in 3D. It's sort of sexy, in a weird way.

Data layers in the latest SimCity

Update: Richard Shemaka, the principal engineer at Maxis for data layers in SimCity, is a fan of the project. We're gobsmacked!

A gappsite

Something we still haven't worked out is how to describe the media format that ViziCities falls under. It's not a game, yet it has many game-like features and takes a lot from game design. It's not an app, yet it acts like an app and is technically built like one. It's not a website, yet it sort of is one.

So what is it? The best we've come up with so far is that it's a gappsite. That'll do until we think of something more serious. If you hadn't guessed, we really haven't put too much thought into this yet.

Created in our spare time

Although not important to describing the project, it has turned out that a lot of people have assumed that there is a large team working on ViziCities, or that we're being paid to do this. So is there, and are we?

No. Pete and I are the only 2 working on this and we're doing it in our spare time. This is a lot easier for me, seeing as I quit Mozilla back in January to do exactly this without having to worry about income. We worked out that last month alone I sank around 300 hours into the project!

1 month ago we had nothing

I'm still quite amazed that just over a month ago ViziCities was nothing more than a crazy idea and an empty scene set up in Three.js (a WebGL library).

ViziCities: Basic scene

Not only did we have nothing created (aside from a camera that you could rotate, woo), we also had very little idea about how to turn this vision into a reality. Fortunately, the whole reason I do projects like this is to learn something new — I thrive off the fear of the unknown; that feeling you get when you find yourself out of your depth.

If only we knew how much there was to learn…

Finding the data

The biggest problem by far has been finding accurate, usable data for the locations that we plan to visualise in 3D. For the proof of concept, we've been specifically looking for data about London. Why London? I was brought up there, we both live in the UK, and we assumed it would have copious amounts of free and easily accessible data.


Sort of…

We already knew that there were plenty of free sources of data for London (and the UK); like, the London Datastore, Ordnance Survey OpenData, OpenStreetMap, the Office for National Statistics, and These gave us the bulk of what we needed (census data, crime, geographic features, etc.); however, they're all in different file formats and levels of detail, and they often represent areas in different ways. It's a mess.

Most annoying has been finding accurate building outline and height data. I assumed this would be freely available (like in the US) but it turns out that you either need to be in full-time higher education, or have a tonne of money at your disposal. I'm no longer a student and don't have the budget to buy building data for all of London (let alone the UK), so we've had to make do with the super-simple building outlines from Ordnance Survey's VectorMap dataset. The unfortunate thing about this is, aside from the low detail, there is no height data for the buildings so we've had to come up with a method of performing an educated guess.
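As a rough illustration of the kind of educated guess involved, here's a minimal sketch of one plausible heuristic (hypothetical, and not necessarily the one we use): derive a height from the footprint area, on the assumption that bigger footprints tend to mean taller buildings.

```javascript
// Hypothetical height heuristic: guess a building's height from its
// footprint area. The constants are illustrative only.
function estimateHeightFromFootprint(areaSqMetres) {
  var STOREY_HEIGHT = 3.5; // assumed metres per storey
  var storeys = Math.max(1, Math.round(Math.sqrt(areaSqMetres) / 10));
  return storeys * STOREY_HEIGHT;
}
```

A 100m² outbuilding gets a single storey this way, while a 10,000m² block gets several. Crude, but better than a flat city.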

If you're involved in the building data side of things or know someone who is, please send an email to as we'd love to make buildings in ViziCities more accurate for London.

Working out how to use the data

The second biggest problem has been working out how to use the data we've collected and learning how to use the related tools. Neither Pete nor I have any significant experience with GIS (Geographic Information System) software or the related data practices. We had a long way to go.

Reading geographic data

Fortunately, it turns out that there are a few common formats that geographic data is provided in. In our case, we ended up using data provided in what's called a shapefile. To read this data you can use a free piece of software called QGIS, which is effectively Photoshop for geographic data analysis.


QGIS allows you to open the shapefile data and manipulate it using a slightly complicated GUI (which you do get used to). It also lets you do complex analysis, as well as import and merge external data. Basically, if you're doing anything serious with GIS then you'll likely end up using QGIS or the proprietary alternative, ArcGIS.

Enter PostGIS

While QGIS is great, it only gets you so far; the features I needed weren't multi-threaded, or particularly quick. It also periodically locks up whenever you're playing with a large quantity of data, like every single building in London. It's good for initial manipulation but I needed something more robust for the hardcore data manipulation.

PostGIS is the solution to this problem. It's an extension to the Postgres database that provides a huge amount of geographic functionality, allowing for super-quick spatial analysis.

To put things in perspective, an intensive process that takes hours in QGIS (locking it up in the process) can potentially take just a matter of minutes in PostGIS if you do things right. It's a no-brainer.

Getting help via the GIS StackExchange

Arguably the most useful source of help for me during the early development of ViziCities has been the GIS StackExchange. It's hands down the most useful source of information for common GIS problems, and even those that aren't so common.

Visualising things in WebGL

What I thought would be the most difficult step actually turned out to be one of the simplest. The beauty of geographic data is that it's commonly stored as points, lines, or polygons, which map perfectly to 2D and 3D drawing platforms like WebGL. The only thing that I needed to do was export the data into an optimised GeoJSON format and convert the geographic coordinates for each point into pixel coordinates. Simple.
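The coordinate conversion step can be sketched with the standard spherical (Web) Mercator formulas; the exact projection and units we use may differ, so treat this as an illustration rather than our implementation:

```javascript
// Sketch: project longitude/latitude into flat scene units using
// spherical (Web) Mercator.
var EARTH_RADIUS = 6378137; // metres (WGS84)

function project(lon, lat) {
  var x = EARTH_RADIUS * lon * Math.PI / 180;
  var y = EARTH_RADIUS * Math.log(Math.tan(Math.PI / 4 + lat * Math.PI / 360));
  return [x, y];
}
```

From there, every point in the GeoJSON can be mapped straight into scene space.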

Adding buildings

The most interesting aspect to me, at least at the beginning of the project, was visualising building data for cities in 3D. This hasn't really been done before in WebGL, at least not outside projects with massive development teams and budgets (like Google Maps and Nokia Maps).

To keep things simple, and because I was sure it wouldn't work, I tried outputting the centroid (centre) position for every building in a 'small' 8x8km section of London. To my amazement, it worked!

ViziCities: Building centroids

However, having points for buildings is not that glamorous. What's needed instead is polygons, a seemingly complex process. The good news was that Three.js had functionality built in to construct shapes (polygons) from a collection of individual points, which was perfect considering that GeoJSON represents polygons (buildings) as a collection of points.

The result was a solid shape for every building in our section of London, all in WebGL.

ViziCities: Building outlines

At this point I was gobsmacked at a) how easy this turned out to be, and b) how beautiful it was. It's one thing to visualise procedural data, but visualising real-life data and recognising it is quite another. If you squint, you can even see the outline of the Thames at the bottom.

I wasn't prepared to stop at 2D outlines. The whole point of this project is to visualise cities in 3D, so the next step was turning the polygon shapes into full-blown 3D objects. Three.js again came to the rescue with its ability to extrude 2D shapes.

ViziCities: Building objects

The heights are off (random values) but you can clearly see that this is some sort of urban area. If you know London well enough then you can already start to recognise some of the buildings!

What amazed me most at this stage was that everything ran at a silky-smooth 60fps, on all the devices I tested on. In my naivety, I assumed that visualising such a huge quantity of objects (many thousands) would be too much for WebGL. It turns out it isn't, and after seeing examples of Three.js rendering over a hundred thousand objects I can see that I was wrong to assume any less.

Experimenting with SSAO and tilt-shift

After the success with buildings I decided to take a stab at two visual effects that would help really make ViziCities pop: Screen Space Ambient Occlusion (SSAO) and tilt-shift.

SSAO is a rendering technique that analyses the depth buffer to work out which objects are occluding (overlapping) others and applies a faux-shading effect around the edges of objects to give them definition. This sort of effect is sometimes referred to as clay rendering.

When done wrong, the results look pretty appalling…

ViziCities: SSAO fail

But when refined and combined with better lighting, the result can make an entire 3D scene pop out and bite you in the face. It's a beautiful effect that adds a whole element of realism to the scene.

ViziCities: SSAO success

In the example above you can also see the tilt-shift effect that we applied alongside SSAO to give the feeling of miniaturisation. It's a similar effect to the one SimCity used in its latest game and it's commonly used in photography to make urban spaces look small and toy-like. There's something about the tilt-shift effect that we absolutely love.

Adding natural features

Although buildings are the lifeblood of any legitimate city, there are many natural features that are needed to complete it.

It was at this point that I took the Ordnance Survey and OpenStreetMap data and merged together a selection of common natural features for London; particularly the river Thames, bodies of water, fields, and large areas of trees.

The results speak for themselves…

ViziCities: Combined features

It's actually a very minor visual addition to the scene but the river alone creates a feeling of context that allows you to get a much better idea about where in London this actually is. For those who are interested, it's the area North of the Thames between the Houses of Parliament and the O2 Arena (aka. Millennium Dome), the latter of which you can see in the bottom right-hand corner (in a rather unflattering level of detail).

Make no mistake. At this point we knew that this was no longer just a 3D model, this was beginning to become London.

Adding roads

Another feature that people recognise about cities is roads; after all, that's what most people use traditional maps for. They're an urban feature that we knew from the very beginning that we had to include — you just can't have a city without roads.

To begin with I took a series of road 'nodes' (points) from Ordnance Survey that described every junction in London. From here I outputted them in a basic scene similar to the initial building scene.

ViziCities: Road points

Although unconnected, you can already see the roads beginning to appear, as well as geographic features like the river.

Using lines as roads

By connecting the nodes we started to get a better idea about what the roads could look like, although it was far from perfect.

ViziCities: Road lines

You'll notice that there are a whole bunch of gaps between the roads. In hindsight, I now know why that is (using junctions is not the right way to do this), however there is also another problem: the line width stays the same at all zoom levels. Things look a little unrealistic when you have really thin lines when you zoom all the way in, or really thick lines when you zoom out. We could have used ribbons in Three.js but we instead decided to explore other options for drawing roads.

Using voids to infer roads

One of these alternative options was to output the spaces between roads as polygons (effectively, city blocks) and infer the position of roads by the void left between these polygons. This approach allowed us to create a decent representation of roads…

ViziCities: Road outlines

It wasn't perfect though, as can be seen by the huge expanse of solid green in the top right-hand corner. This is because the approach I'm taking doesn't include every single tiny road in London, just the larger A and B roads. The unfortunate effect of this is that there are sometimes small gaps where a smaller road might be, meaning that the algorithm used to calculate city blocks can overlook legitimate areas that should be a road.

Simplifying the void approach

Instead, the final approach I went for was to expand the building outlines to create a sort of pavement effect. The void left between these expanded building outlines had a similar effect of looking like roads, albeit slightly less detailed than the previous approach.

ViziCities: Road building outlines

Though not perfect, this approach was good enough for our needs and when combined with natural features it did create an effect that looked like roads between buildings.

ViziCities: Road buildings

We're not ruling out another look at using ribbons as roads, as this would certainly be the 'perfect' solution. For now though, this will do.

Adding data layers

Visualising cities, at least the visible aspect of them, is only part of what the project is about. A secondary focus of the project is the visualisation of data that, when combined with the buildings and natural features, creates a whole new level of context and exploration.

Bar charts

We could have started anywhere with data, so the decision was made to grab the first usable data source that we could find (population density) and create a naive bar-chart effect to see what it looked like.

ViziCities: Data layer bars

As you can see, it's a bit crazy. It's sort of like those toys you can get which are full of pins that people can't stop pressing their faces into.

To simplify things I only outputted bars at what are known as Lower Layer Super Output Areas (LSOAs — no idea where the second L went). LSOAs are commonly used in the UK for census data and other data that needs to describe detailed geographic areas smaller than boroughs and neighbourhoods, though not as detailed as per-street.

ViziCities: Data layer bars

As you can see, I also added a rough colour scale to get an idea of low and high values.
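The colour scale itself can be as simple as a linear blend; something along these lines (a sketch, not our exact ramp):

```javascript
// Sketch of a rough low-to-high colour scale: linearly blend from green
// (low) to red (high) and return a hex colour suitable for a material.
function colourForValue(value, min, max) {
  var t = Math.min(1, Math.max(0, (value - min) / (max - min)));
  var r = Math.round(255 * t);
  var g = Math.round(255 * (1 - t));
  return (r << 16) | (g << 8); // 0xRRGG00
}
```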

Without the city underneath it's kind of hard to get any context, so that's what was added next.

ViziCities: Data layer bars & buildings

It was at this point that things were beginning to look and feel like the data layers from SimCity that inspired us so much.

Something was missing though.


ViziCities: Data layer choropleth & buildings

The first iteration of heatmaps used the LSOA outlines and coloured each one based on its population density value. As you can see, the area in the top-right corner is very light as no one likes living in parks or marshland.

Coupled with bars, the resulting data layer was a sight to behold!

ViziCities: Data layer choropleth, bars, and buildings

We could do better though. With the data laid over the top of the city (you can just about see the buildings through it), it was hard to grasp the geographic context.

To solve this I added borough outlines.

ViziCities: Data boroughs

The result, as I'm sure you'll agree, is much better than before. It's now practically impossible to mis-read the data from a geographic context, plus it actually allows you to start comparing different, known locations around London. Things were getting interesting.

Most recently, we experimented with slightly more obvious borough outlines and a hexagon grid for the heatmap rather than LSOA outlines.

ViziCities: Data hexagons

I'm a big fan of the hexagon grid; I think it looks neat (in all senses of the word). It reminds me a little of games like Risk.

Experimenting with AI

Not satisfied with having tackled buildings, natural features, roads, and data, we set ourselves the next challenge: implementing AI. Specifically, cars driving along the roads with no prior instruction.

There are a million and one ways to tackle this problem depending on the level of detail you want. We wanted something simple so we kept things basic. The approach that we went for was to create a lookup table that represented every single road segment in the city. From there, we could create AI that simply moved from segment to segment, turning at segments that were connected to more than one segment (junctions).
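As a sketch of the idea (the data shapes here are assumptions, not our actual format), the lookup table and the "pick a branch at a junction" step might look like:

```javascript
// Sketch: segments are pairs of node IDs. The lookup table maps each node
// to the indices of the segments that meet there; a car picks any segment
// other than the one it arrived on, or turns around at a dead end.
function buildLookup(segments) {
  var byNode = {};
  segments.forEach(function (segment, i) {
    segment.forEach(function (node) {
      (byNode[node] = byNode[node] || []).push(i);
    });
  });
  return byNode;
}

function nextSegment(byNode, node, cameFrom) {
  var options = byNode[node].filter(function (i) { return i !== cameFrom; });
  if (options.length === 0) { return cameFrom; } // dead end: turn around
  return options[Math.floor(Math.random() * options.length)];
}
```

Each car then just interpolates its position along its current segment and asks for the next one when it reaches the end.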

The result was simply jaw-dropping, if I do say so myself.

As always, it isn't perfect (the AI don't move at a set speed) but it's a fantastic starting point that we're looking forward to expanding out in the near future.

When coupled with the 3D buildings and natural features, the AI really does make the city look like it's alive.

ViziCities: AI

I can't wait to refine what we have and add the other AI ideas that we've got planned!

Expanding London

At this point we decided that we'd learnt enough about this small area of London and wanted to expand.

ViziCities: Expanding London

Compared to the original section of London (in green) the new, larger section was a massive upgrade. When everything was added, data and all, the effect was quite beautiful.

ViziCities: Expanding London

It really looked like London, the shape and everything.

Improving performance

Unfortunately, in our efforts for global domination we quickly discovered the performance limitations of the project as it stood, as well as the limits of my personal knowledge of Three.js and WebGL. In short, the expanded version of London ran at a horrible 20–40fps depending on the system you viewed it on.

To solve this, I spent a good week purely focussed on learning and implementing methods to increase performance en masse. The aim was to get things back up to 60fps, at least when zoomed in.

Frustum culling

The first problem was that the method I had used to visualise buildings (merging them into a single object) meant that a technique called frustum culling would no longer work. This technique doesn't render objects that are outside of the view frustum (screen edges) and so lightens the load on the GPU by only rendering what's visible. In my case, by using a single, giant object it was always going to try and render the entire thing because it was always in view.

Fortunately, solving the frustum issue was simple. All that was needed was to split the city into a grid of smaller areas that could fall outside of the view frustum.
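The split itself is conceptually just a bucketing step; roughly like this (tile size and data shapes are assumptions):

```javascript
// Sketch: bucket buildings into fixed-size tiles by their centroid so each
// tile can become its own mesh and be frustum-culled independently.
function tileKey(x, y, tileSize) {
  return Math.floor(x / tileSize) + ',' + Math.floor(y / tileSize);
}

function groupIntoTiles(buildings, tileSize) {
  var tiles = {};
  buildings.forEach(function (building) {
    var key = tileKey(building.x, building.y, tileSize);
    (tiles[key] = tiles[key] || []).push(building);
  });
  return tiles; // one merged mesh is then built per tile
}
```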

ViziCities: Performance grid

While this patchwork-like approach didn't improve things when zoomed out, it dramatically improved performance when zoomed in. In fact, this fix alone bumped things to 60fps when zoomed in, simply because unnecessary objects weren't being rendered.

Level of detail

To take things a step further I decided to tackle the performance problem when zoomed out. There are a couple of approaches to take here; like removing smaller objects or reducing the level of detail (LOD) as you zoom out. I took the second approach and utilised the LOD functionality built into Three.js.

ViziCities: Performance LOD

What you can see is the new LOD approach in action. High-detail objects are in red, medium-detail in green, and low-detail in blue.

This technique didn't quite bump things up to 60fps but it was certainly a whole lot better!

Taking a new approach

Based on what we learnt from the prior work, specifically the performance issues, we decided to take a different approach to the visual design of the project.

What we already knew from the initial work with a small section of London (8x8km isn't very small, really) was that it performed very well on most devices we tested on. This was partly due to the smaller number of objects, though it's arguably more because there is a defined limitation on size that we're working within. This means that we can optimise performance in a controlled fashion as we always know how big the area we're rendering is. Compare this to the naive approach we originally took in which we aimed to render an entire city all in one go.

It's not all roses though. The problem with rendering a smaller area of a city is that you only see a small part of it. We're still committed to bringing entire cities to life with ViziCities so we're looking at a new approach that couples the controlled nature of a small geographic area with the ability to explore a large city.

Part of our solution is to set the 8x8km section of the city on what we're calling a plinth, which really enforces the miniaturisation effect when combined with SSAO and tilt-shift.

ViziCities: Plinth

There are many more parts to this approach that we're yet to implement; namely the parts that enforce that you're seeing just one small part of a much larger city, as well as the parts that allow you to navigate around that larger city.

We're still working on those bits but we're very happy with how the plinth concept is looking so far.

The next month

We've certainly come a heck of a long way since we started from scratch just over 30 days ago. If I'm to be honest, after compiling and writing this entry I'm actually in a state of disbelief about how much we've managed to get done in such a short space of time.

Rest assured, we haven't stopped and we're far from completing our vision.

Here's an unfairly-blurred look at something we're very excited about that we've not really mentioned before now.

ViziCities: Future

Be the first to get beta access

I hope this insight has given you a better idea about what ViziCities is and what we've been up to this past month.

Make sure that you sign up for the beta to be amongst the first to use it. You should also follow ViziCities on Twitter, as well as Pete and myself.

We're massively excited about what we're working on. Watch this space!

Brownbag and Firefox and you call

Posted By satdavuk

Hello everyone. Today at 3pm PST we are having a call in #marketing on about Brownbag and Firefox and You.


Please see the agenda for call-in details and the Vidyo link.

SUMO new Mobile design

Posted By satdavuk

SUMO has released a new mobile site theme for people using smartphones such as the iPhone or Android phones.

Please note that in order to access it you need to be logged into for the new theme to be available.


Please note that we are still testing the mobile theme for now.


Go to to test the site yourself.

SweetIM Toolbar has been blocked for your protection.

Posted By satdavuk

Hello everyone, I am just writing to let you know that as of today SweetIM has been blocked.

Why was it blocked? This add-on is silently installed along with other software, without user consent. It changes settings like the homepage and other search preferences, also without consent. It is generally regarded as malware and most users didn’t intend to install it.

Who is affected? All Firefox users who have this add-on installed.

What does this mean? Users are strongly encouraged to disable the problematic add-on or plugin, but may choose to continue using it if they accept the risks described.

When Mozilla becomes aware of add-ons, plugins, or other third-party software that seriously compromises Firefox security, stability, or performance and meets certain criteria, the software may be blocked from general use. For more information, please read this support article.

Blocked on December 6, 2012. View block request.

Building a big tent for teaching the web

Posted By Leo McArdle

Being a teenager who’s witnessed the UK’s shoddy Computing (or, as we like to call it, ICT) education, I’m not surprised that, according to a recent YouGov poll of UK parents and youth commissioned by Mozilla, only 3% of British 8–15 year olds are currently being given the opportunity to learn and write computer code, while a majority are interested in it.

Let’s stop wasting kids’ time by teaching them (for the fifth time) how to make a spreadsheet. Or rather, let’s teach them, but in a lesson where it all makes sense: Maths (the same goes for word processing, which English teachers can, and should, teach). Instead, let’s teach them skills which will be useful in the world of tomorrow. As a careers advisor who came into my school the other day said, “many of the jobs you will be going into don’t yet exist.” Many of those jobs will rely on some aspect of Computing.

So, what is Mozilla doing about it, specifically in the UK? Well, just before the Mozilla Festival (which I attended, and will blog about soon) Mozilla announced a new partnership aimed at spreading digital literacy and building a big tent for teaching the web, all in the UK. Interested in how we’re going to do it? Or maybe you have an idea on how to get webmaking to reach the masses? Or are you an organisation focused on teaching digital literacy in the UK? If so, you should go read the full blog post here:

Sumo day this Thursday

Posted By satdavuk

This Thursday is SUMO day.

Start: October 11, 2012
End: October 12, 2012

Join us this Thursday, October 11th for the next SUMO Questions day.

Please take a few minutes to answer just a couple of questions from new Firefox users. Check out the brand new SUMO UI, and enjoy using the newly redesigned site while answering user questions.

Join us: create an account and then take some time on Thursday to help with unanswered questions. Additional tips for getting started are on the etherpad. Our goal is to respond to every new question posted Thursday, so please try to commit to answering as many questions as you can throughout the day.

We’ll be answering questions in the support forum and helping each other in #sumo on IRC from 9am to 5pm PST (UTC -8).