On leaving the Guardian: dreams of digital journalism

After joining the Guardian four years ago, as I gradually learned how journalism was done, I started to piece together a vision of what I wanted to help the Guardian become, and of what I believed digital journalism could achieve.

At the core stood two beliefs.

First, that good journalism is about helping people make good decisions for themselves and for wider society: make up their minds in an informed way based on facts, understand key issues, put events in perspective, be entertained in an enriching way, and resist having their opinions swayed by emotion and the populist devices of hatred and fear.

Second, that new digital media provide an opportunity to fulfil that mission in a much richer and more impactful way. Historically, new media have always started by emulating their predecessors before being exploited for their own possibilities and transforming our expectations: early photography sought to emulate 19th-century portrait painting; the first public radio broadcast was of a live opera performance, with radio drama and news bulletins only appearing a decade later. Many argue that digital journalism is still in the first phase (emulation), waiting to make full use of the medium and transform journalism in the same way that Google, Wikipedia, Google Maps or Spotify have transformed the way we think about information, knowledge, space and music.

Initially, the vision remained fairly abstract, but as I talked and reflected with colleagues who shared many of the same ideas, two themes emerged.

Structure & narrative, by evolving beyond merely copy/pasting newspaper content onto the Web as individual article pages, and toward weaving an editorial thread between pieces. Even paper spreads on a common topic exhibit stronger cohesion between pieces than their corresponding Web pages, loosely connected with hyperlinks or “See also” carousels. Recently, the NYTimes has started applying a clearer visual treatment to its magazine features, bridging the gap from print. But there is an opportunity to go further and to provide a richer experience for content with strong semantic metadata: reviews browsable by work, creator or similarity, travel articles by type or destination, key opinion pieces on a given topic.

Structure also applies to each individual piece of content: others have argued that the future of news is not an article, or certainly not the serialised blob of information we are used to today. Structured recipes demonstrate the benefits of a domain-specific content model; can we do the same for news or features? Can we compose our journalism so that it readily answers the questions readers come with? Can every term or concept in an article link to further context to deepen understanding, as Wikipedia does?
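To make the idea more concrete, here is a minimal sketch in Scala (the language of the Guardian’s backends) of what a domain-specific content model for a story might look like. Everything here is hypothetical, my own illustration rather than any existing Guardian schema:

```scala
// Hypothetical sketch: instead of one serialised blob of HTML, a story is
// composed of typed atoms that can be queried, recombined and linked to concepts.

case class Concept(id: String, label: String) // e.g. a person, place or topic

sealed trait Atom
case class KeyFact(text: String, concepts: List[Concept])            extends Atom
case class Quote(text: String, speaker: Concept)                     extends Atom
case class TimelineEvent(date: java.time.LocalDate, summary: String) extends Atom
case class Paragraph(text: String, mentions: List[Concept])          extends Atom

case class Story(id: String, headline: String, atoms: List[Atom]) {

  // Answer a reader's question such as "what happened so far?"
  def timeline: List[TimelineEvent] =
    atoms.collect { case e: TimelineEvent => e }.sortBy(_.date.toEpochDay)

  // Every concept mentioned, each a hook for further context, Wikipedia-style
  def relatedConcepts: List[Concept] =
    atoms.flatMap {
      case KeyFact(_, cs)   => cs
      case Quote(_, s)      => List(s)
      case Paragraph(_, cs) => cs
      case _: TimelineEvent => Nil
    }.distinct
}
```

A model along these lines would let the same journalism be recomposed for different needs: a catch-up summary for one reader, a timeline for another, or deep links into related concepts for a third.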

The Guardian pioneered a truly digital format with liveblogs, but it still struggles to present developing stories in a way that both summarises events so far for those catching up and lets avid readers follow updates as they unfold. For longer-running stories, prior context is often lacking. The Storyline ontology we devised years ago has yet to be implemented, and more importantly, so has a rich reader experience built around it.

In parallel, a focus on relevance would help readers get more value out of the barrage of content they constantly face. The Guardian publishes around 600 pieces of content every day, which often amounts to more than 24 hours of uninterrupted reading. Not only is it impossible to read them all, but with one-size-fits-all promotion channels (a single homepage per edition, Facebook groups, Twitter feeds), it’s also impossible to promote them all. Readers end up missing a lot of niche content they would have been interested in.

Importance is relative: what’s crucial to one reader might be trivial to another. By failing to combine editorial control with algorithmic personalisation, out of an irrational fear of compromising their voice, publications fail to speak to everyone individually. If publishers retreat into being mere content producers, other, more engaging services will provide relevance instead, reducing opportunities to promote an editorial agenda or build a valuable brand people may want to support financially.
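Purely as an illustration of the principle (and not anything the Guardian actually runs), combining editorial control with personalisation could be as simple as blending an editor-assigned priority with a per-reader interest score. All names and weights below are made up:

```scala
// Illustrative only: rank stories by blending an editor-assigned priority
// with a per-reader interest score, so editorial judgement still anchors
// what everyone sees while niche content reaches the readers who care.

case class RankedStory(id: String, editorialPriority: Double, tags: Set[String])
case class Reader(interests: Map[String, Double]) // tag -> affinity in [0, 1]

def score(story: RankedStory, reader: Reader, editorialWeight: Double = 0.6): Double = {
  val personal =
    if (story.tags.isEmpty) 0.0
    else story.tags.toList.map(t => reader.interests.getOrElse(t, 0.0)).max
  editorialWeight * story.editorialPriority + (1 - editorialWeight) * personal
}

def frontPage(stories: List[RankedStory], reader: Reader, n: Int = 10): List[RankedStory] =
  stories.sortBy(s => -score(s, reader)).take(n)
```

The point is not this particular formula but that editorial judgement stays in the loop: editors still decide what matters most, while the long tail of niche content finds the readers who care about it.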

Overall, the vision was ambitious, and may not have been shared by all, but it became my main motivation for working there.

Over time, however, I grew increasingly disillusioned with the Guardian’s ability to deliver it. While it absolutely has the right values and technical expertise, I felt the organisation as a whole wasn’t ready to embrace the medium so radically. And if one of the most technology-aware publishers can’t do it, it may be a sign that the transformation won’t come from a traditional news organisation.

Towards the end of 2015, I realised it was time for me to go.


Four years is quite a long time, and looking back, I am amazed by how much was achieved in that time.

What started as a hacky liveblogging editor when I joined ended up becoming Composer, the CMS in which all Guardian content is produced today. We grew the Scala API backend and matured the JavaScript frontend, introducing RequireJS and Grunt, replacing Backbone with Angular, and building a rich-text editing framework.

In the last 18 months, I led the development of the Grid, the new image management system used to enrich both print and web content, which is now open source. We distilled the best of what we’d learned on Composer and added cutting-edge technologies on both the frontend (RxJS/ImmutableJS) and the backend (Hypermedia APIs with argo).

One of the many lessons I learned was that “it’s better to ask for forgiveness than for permission” (thanks Matt Chadburn), and there was indeed a wide enough margin of freedom to allow for a number of peripheral projects: Plumber, a Node-based tool for managing declarative web asset pipelines; Live monitoring radiators powered by composable Polymer Web Components; Teleporter, a Chrome/Firefox extension for internal staff to jump between different views of a piece of content, and identify any image on the site; @metagu, a pseudo-AI Twitter bot interface to the Guardian (quick guide and presentation).

In the process, I learned huge amounts about functional and reactive programming; Scala and modern JavaScript; the Web, Angular, responsive Web design, progressive enhancement and all the latest trends in frontend development; AWS, NoSQL, horizontal scaling and the best practices of backend development and ops. I even suffered brief epiphanies during which I almost understood how sbt works, and how to explain what a monad is.

More importantly perhaps, I learned how to build software: Agile, continuous deployment, shipping constantly. While building the Grid, we experimented with a completely lean development process, working directly with users, making hard choices to create value and build the right product efficiently.

Inspired and encouraged by some of my colleagues, in particular the ever brilliant Patrick Hamann, I also started giving some talks about the nature of the Web, building a CMS for the responsive web, server-less applications using Web Components, Hypermedia APIs, Reactive Programming and reactive UIs (twice), including as part of an Angular application.

This journey was only possible thanks to the fantastic people I worked with. The Guardian “digital development” team is blessed with many caring, passionate, generous and patient people, to whom I am eternally grateful.

The Guardian holds a well-deserved reputation for technical excellence, pioneering frontend performance techniques and promoting Agile Scala in the AWS cloud™. Such a reputation must be continually renewed, and I hope they can uphold it as they weather the upcoming changes.

Of course, high standards weren’t limited to engineers or the digital department; they applied to the whole organisation. It was a privilege to work alongside so many talented reporters, editors, image/font gurus extraordinaire and other editorial staff, and to witness the fascinating brainstorming of the morning editorial conference.

There is no easy way to leave what I consider a pillar of British democratic society and the most important news organisation in the UK, if not the world; one that exposed the phone-hacking scandal and provided a platform for WikiLeaks and the Edward Snowden revelations. Witnessing Alan Rusbridger’s courage through it all was a true lesson in journalistic integrity. Plus, how often do you get to work with a product manager trusted enough to meet Snowden?


But if the last four years taught me anything, it is the importance of pursuing one’s vision without compromise, which in my case means exploiting technological opportunities to have a positive impact on society.

So far, I have only worked for traditional players: the Guardian in the news industry, and in my previous job, a small media startup strongly tied to the legacy major labels in the music industry. These legacy organisations were built and optimised for a pre-Internet world that no longer exists, and unfortunately, changing that mindset proves to be a huge struggle for even the most progressive of them. Whether out of fear or conservatism, they tend to lack the vision, ambition or courage to invest in harnessing technology to build the future, instead of attempting to perpetuate the past.

This is why I’m extremely excited to be joining Google to work on a publishing-related product. Time will tell if Google is the right place for me, but I can’t think of many more progressive companies in terms of leveraging digital technology to make an impact on society.