x-posted from d.o:
"4877. That is where the tradition within the Drupal community of making predictions for the year ahead with regards to our software, our community and broader, the web, started. Node 4877, written at the end of the year 2003. We have come a long way since then.
This year we would like to know what you think the year ahead will bring for Drupal and, as a bonus, we would like to know what was the best prediction you found in the past. Where did we shine when it comes to vision or humor.
See older entries from 2005, 2006, 2007, 2008, 2009, 2010, 2011, 2012 and 2013. Read them.
And now predict for 2014 and reflect the last decade in this thread."
Oh, and happy Bday Drupal :-)
With 179 days to go, it is time to give some airtime to Dropcamp.nl. According to the website, DropCamp (a good name, and the funny domain name dropcamp.us is even better :-) ) is:
Organised by veterans of the Drupal community, DropCamp welcomes you to the city of Enschede for a weekend filled with Drupal like you have never experienced before. We will come together in the first weekend of July to teach, learn, and do business about Drupal in a relaxing and social setting. The DropCamp village will be a place where you will breakfast with hardcore Drupalistas, find business opportunities, and share ideas and drinks with newcomers.
A camp hosted at the University of Enschede, close to Germany and 100 days before DrupalCon Amsterdam, it is an ideal place to nerd around and have some fun. Be sure to check out DropCamp on Facebook and Twitter, sign up for the newsletter, and maybe some people will even come to the event by bike, like the "come to DrupalCon Amsterdam by bike" Tour de Drupal movement. So bring your own drush and tent and have fun.
When the three orange Dutch guys presented DrupalCon Amsterdam 2014 in Prague, they had a slide (#36) where they joked that one should come to Amsterdam, The Netherlands by bike.
Two friends were funny enough to take this from "a joke" to "a practical joke". Rachel and Stefan created "Tour de Drupal", a community movement to get as many Drupalistas as possible to DrupalCon Amsterdam 2014, 330 days from now, by bike!
If you come to this DrupalCon, there is no excuse: you have to come by bike and put yourself on the map. While you are at it, follow our friends at @TourDeDrupal as well. Even I am coming by bike, and so should you, Dries!
There is bound to be more funny stuff coming from the community in Amsterdam; I hope to be involved in some of it and will post it here as well. There is, for example, talk of a Eurovision-style song festival with Drupal songs and a revival of the Kitten Killers, so bring your guitar as well.
So in the closing ceremony we can now have lists of the number of megabits used, liters of coffee drunk and number of flat tires… :-)
Commodity sinks, innovation rises. An old rule, and the reason the drop has to keep on moving, adopt shorter release cycles and take up new technologies faster, to make sure we don't become what we replaced: old, outdated systems that are slow to adapt and quick to go extinct.
There are two sides to this: we have to grab new technologies faster and dump older technologies sooner. We must make sure that adopting new technologies faster doesn't create a legacy we then have to shed faster. Or at least, that is my opinion.
"Nobody" in the world logs in with openID anymore. Many appliaction to application in backend still might be using it, but nobody uses it to authenticate anymore. The last bastion Janrain just announced that it will close the doors. Drupal will drop OpenID form core as well and might have done this as well a long time ago. So when Drupal 8 will see the light in janury 2014, and we still would have release cycles over a 1000 days, the maintainers will still be dealing with an OpenID implementation and supporting it in Drupal core 7 up to the time D9 ships, somewhere around Q4 2017. Extrapolating our current release cycles most likely later, much later.
This is not to criticise anyone. I think it was a wise step to include OpenID back in the Drupal 6 days, and it is a wise step to remove it from D8. But the time between these releases, and hence between these decisions, should be shorter, and especially the time the code has to be maintained.
Want another example of an innovation that was once great and is now holding us back? Gzipping pages. It has been in core ever since 4.5 and was a great feature (though I had some problems with it back then :-)). But it is wrong now: it holds us back with duplicate functionality that has to be maintained and that is better served at another OSI layer.
Back when webservers didn't compress pages and elements by default, it made perfect sense to do so from Drupal: a great way to save bandwidth and deliver pages faster to the user. But now that all webservers compress pages by default (and other elements, like a big Word document served as an attachment from /files/!), it is code that has to go. The innovation was great, but it sank lower in the stack and became a commodity in all major webservers. That is the risk with all innovations: if one keeps holding on to innovations that have already become commodities, one ends up down there as well.
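To see why gzip became a commodity worth pushing down the stack, here is a quick stdlib-only sketch of how well repetitive HTML compresses. The markup is a made-up stand-in for a real Drupal page:

```python
import gzip

# Typical CMS output is highly repetitive (the same wrappers, classes
# and markup over and over), so it compresses extremely well.
html = (
    "<html><body>"
    + "<div class='node'><p>Lorem ipsum dolor sit amet</p></div>" * 200
    + "</body></html>"
).encode("utf-8")

compressed = gzip.compress(html)

# The compressed payload is a small fraction of the original size,
# which is exactly the bandwidth win Drupal's page compression gave
# us before webservers did it by default.
print(len(html), "bytes raw,", len(compressed), "bytes gzipped")
```

Whether Drupal or the webserver does this, the bytes on the wire are the same; the point is that the webserver now does it for free.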
This holds true for many elements of frontend performance. Right now it seems like a good thing to combine multiple CSS or JS files into one file. But once SPDY becomes mainstream, this is better done in the HTTP protocol, not in the CMS.
And traditional frontend performance wisdom states that we have to use sprites in the template.
While if we add one module and one line of configuration, this is all done at the webserver level with image sprites.
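As a sketch of that "one module and one line" claim, assuming Apache with the mod_pagespeed module already loaded, spriting can be switched on with a single directive:

```apache
# Hypothetical httpd.conf fragment: let mod_pagespeed combine small
# background images into sprites at the webserver level.
ModPagespeedEnableFilters sprite_images
```

The template keeps referencing the individual images; the rewriting happens on the way out.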
And we should use selective DATA URIs in our template. Most frontend devs will puke: binary data in a template? Are we some ugly old technology?
Again, with one command, the webserver layer will migrate these smaller images from flat files to inline DATA URIs.
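What the webserver does there can be illustrated in a few lines of Python. The eight bytes below are just the PNG file signature, standing in for a real small image:

```python
import base64

# The PNG file signature, used here as a stand-in for a real
# small image's bytes.
png_bytes = bytes([0x89, 0x50, 0x4E, 0x47, 0x0D, 0x0A, 0x1A, 0x0A])

# A data URI embeds the base64-encoded bytes directly in the page,
# saving one HTTP request per small image.
data_uri = "data:image/png;base64," + base64.b64encode(png_bytes).decode("ascii")
print(data_uri)
# → data:image/png;base64,iVBORw0KGgo=
```

That string can replace the src attribute of a small img tag; no template change is needed when the webserver layer does the rewriting.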
Take a look at this impressive list of options that mod_pagespeed (a webserver module) can help you with:
- Minimize Request Overhead: Rewrite Domains, Configuration file directive to map domains
- Other: Add Head, Add Instrumentation, Inline @import to Link, Make Google Analytics Async, Insert Google Analytics Snippet, Pedantic, Run Experiment
Now, for some of these actions there might be a Drupal module (lazyloading), for some one has to write good CSS/HTML/JS (CSS above scripts), some need good content editors or backend processes (de-duplicating inline images, progressive JPEGs), and some are just not yet doable in the frontend in an easy way (DATA URIs).
So as a frontend dev (ops), do yourself a favour and use the PageSpeed module for Apache or nginx AND keep writing good templates. And as a Drupal community member, make sure that we keep innovating at the top, and set code free at the bottom, where it is better served outside of our hands.
(btw Mike Ryan, there is more retro future at this Pinterest board :-) )
- Performance matters for all websites
- Performance is not just (80%) frontend
- SPDY kills 80% of your frontend problems
In the Drupal and broader web community, there is a lot of attention towards the performance of websites.
While "performance" is a very complex topic on its' own, let us in this posting define it as the speed of the website and the process to optimize the speed of the website (or better broader, the experience of the speed by the user as performance.
This attention to speed has two good reasons. On the one hand, sites are getting bigger and hence slower: the databases grow with more content and the codebase grows with new modules and features. On the other hand, more money is being made with websites, even if you are not selling goods or running ads.
Given that most sites run on the same hardware for years, this results in slower websites, leading to a lower pagerank, less traffic, fewer pages per visit and lower conversion rates. And in the end, if you have a business case for your website, lower profits. Bottom line: if you make money online, you are losing some of it to a slow website.
When it comes to speed there are many parameters to take into account; it is not "just" the average page loading time. First of all, the average is a rather useless metric without taking the standard deviation into account. But apart from that, it comes down to what a "page" is.
- A page can be just the HTML file (which can be done in 50 ms)
- A page can be the complete webpage with all its elements (for many sites, around 10 seconds)
- And a page can be anything "above the fold"
And then there are more interesting metrics than these, for example the time to first byte from a technological point of view. But it is not just a technical PoV: there is a website one visits every day that optimizes its renderable HTML to fit within 1500 bytes.
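The point about averages hiding the real user experience is easy to demonstrate; the load-time samples below are made up:

```python
import statistics

# Two sites with the same average page load time (in ms) but a very
# different user experience.
site_a = [900, 1000, 1100, 1000, 1000]   # consistently around 1 s
site_b = [200, 200, 200, 200, 4200]      # fast, but with a terrible outlier

for name, samples in (("site A", site_a), ("site B", site_b)):
    mean = statistics.mean(samples)
    stdev = statistics.stdev(samples)
    print(f"{name}: mean={mean:.0f} ms, stdev={stdev:.0f} ms")
```

Both means are 1000 ms, yet the standard deviations differ by more than an order of magnitude: reporting the average alone would call these two sites equally fast.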
Steve Souders (the web performance guru) once stated in his golden rule that 80-90% of the end-user response time is spent on the frontend.
SPDY to the rescue?
This 80% might be a matter of debate in the case of a logged-in user in a CMS. But even if it is true, this 80% can be reduced by 80% with SPDY.
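The arithmetic behind that claim is worth spelling out:

```python
# If 80% of the end-user response time is frontend, and SPDY removes
# 80% of that frontend share, how much of the total remains?
frontend_share = 0.80
spdy_reduction = 0.80

saved = frontend_share * spdy_reduction   # 0.64 of the total time
remaining = 1.0 - saved

print(f"{remaining:.0%} of the original response time remains")
# → 36% of the original response time remains
```

In other words, under Souders' golden rule, one protocol change would cut total response time by almost two thirds before anyone touches the database or the codebase.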
SPDY is an open protocol introduced by Google to overcome the problems with HTTP (up to 1.1, including pipelining, defined in 1999!) and the absence of HTTP/2.0. It speeds up HTTP by using one connection between the client and the server for all the elements of the page served by that server. Originally only built into Chrome, many browsers now support this protocol, which will be the basis of HTTP/2.0. Think about it and read about it: a complete webpage with all its elements, regardless of minifying and sprites, served in one stream, with only one TCP handshake and one DNS request. Most of the rules of traditional webperf optimization (CSS aggregation, preloading, prefetching, offloading elements to different hosts, cookie-free domains), all this wisdom is gone, even false, with one simple install. 80% of the 80% gone with SPDY; now one can focus on the hard part: the database and the codebase. :-)
The downside of SPDY, however, is that it is hard to troubleshoot and not yet available in all browsers. It is hard to troubleshoot since most implementations use SSL, and the protocol is multiplexed and zipped by default, not made to be read by humans, unlike HTTP/1.0. There are some tools that make it possible to test SPDY, but most if not all of the tools you use every day, like ab, curl and wget, will fail to use SPDY and fall back, as defined in the protocol, to HTTP/1.0.
So, can we test to see if SPDY is really faster, and how much faster?
Yes: see Evaluating the Performance of SPDY-Enabled Web Servers (a Drupal site :-) )
So more users, less errors under load and a lower page load time. What is there not to like about SPDY?
That is why I would love Drupal.org to run with SPDY; see this issue: d.o/2046731. I really do hope that the infra team will find some time to test this and, once accepted, install it on the production server.
Performance as a Service
One of the projects I have been active in lately is ProjectPAAS (bonus points if you find the easter egg on the site :-) ). ProjectPAAS is a startup that will test a Drupal site, measure 100+ metrics, analyse the data and give the developer an opinionated report on what to change to get better performance. If you like the images around the retro-future theme, be sure to check out the Flickr page, like us on Facebook, follow us on Twitter, but most of all, see the moodboard on Pinterest.
Pinterest itself is doing some good work when it comes to performance as well: not just speed, but also the perception of speed.
Pinterest lazyloads images, but it also displays the image's prominent color as the background of a cell before the image is loaded, giving the user a sense of what is to come. For background on this, see webdistortion.
If you are lazyloading images to give your users faster results, be sure to check out this module we made: lazypaas, currently a sandbox project awaiting approval. It extracts the dominant (most used) color of an image and fills the box where the image will be placed with that color. And if you use it, please do a code review and help it become a full Drupal project.
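The dominant-color trick itself is simple; here is a sketch in Python (not the actual lazypaas code) using a hard-coded pixel list where a real implementation would decode the image file:

```python
from collections import Counter

# Pixels of a tiny, hypothetical image as (R, G, B) tuples. A real
# implementation would read these from the decoded image instead.
pixels = [
    (46, 204, 113), (46, 204, 113), (46, 204, 113),
    (255, 255, 255), (46, 204, 113), (52, 73, 94),
]

# The most common color becomes the placeholder background while the
# real image is still lazyloading.
dominant = Counter(pixels).most_common(1)[0][0]
placeholder_css = "background-color: rgb({}, {}, {});".format(*dominant)
print(placeholder_css)
# → background-color: rgb(46, 204, 113);
```

Applying that CSS rule to the image's container gives the user a hint of the picture before a single image byte has arrived.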
From 80% to 100%
Lazyloading like this leads to a better user experience. Because even when 80% of the end-user response time is spent on the frontend, 100% of the time is spent in the client, most often the browser: the only place where performance should be measured and the only place where performance matters. Hence, all elements that deliver this speed should be optimized, including the webserver and the browser.
Now say this fast after me: SPDY FTW. :-)