Thursday, December 2, 2010

Closure Compiler for OpenLayers 3.x



You have probably heard about Google Closure Tools, the set of JavaScript tools which Google released as open source (Apache License 2.0) more than a year ago. It is the toolset used in the development of Gmail, Google Maps, Reader, Docs and other popular Google products.

In this blog post I would like to summarize what cool features the Closure Tools offer to developers of large open-source JavaScript projects, and to suggest (re)writing OpenLayers 3.0 in such a way that it is fully usable with the Closure Tools. We can all profit from what is already out there - and write smaller and faster web and mobile applications more easily with future versions of OpenLayers, if we decide to go this way.

The most important tool in the Closure set is the Compiler. All the other tools are optional and are built around the features of the compiler. The compiler is usable from the command line or as a web application; you can also HTTP POST your source code and get back the compiled code. The compiler can easily be plugged into the existing OpenLayers build scripts.
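Conceptually, a request to the public compiler web service looks like the sketch below - a minimal sketch assuming the documented service parameters (js_code, compilation_level, output_format, output_info); in practice you would send the request from a build script or an HTML form, since the browser's same-origin policy restricts cross-domain XMLHttpRequest:

    // Sketch: POST a snippet to the Closure Compiler web service and log the result.
    var source = 'function hello(name) { alert("Hello, " + name); } hello("OpenLayers");';
    var params = [
      'js_code=' + encodeURIComponent(source),
      'compilation_level=SIMPLE_OPTIMIZATIONS',
      'output_format=text',
      'output_info=compiled_code'
    ].join('&');

    var request = new XMLHttpRequest();
    request.open('POST', 'http://closure-compiler.appspot.com/compile', true);
    request.setRequestHeader('Content-Type', 'application/x-www-form-urlencoded');
    request.onreadystatechange = function() {
      if (request.readyState === 4 && request.status === 200) {
        console.log(request.responseText);  // the compiled JavaScript
      }
    };
    request.send(params);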


The compiler has several cool features:
  1. It compiles readable JavaScript into compressed, machine-readable JavaScript.
  2. Documentation of the code with JSDoc tags is important: the compiler reads it, and you get warnings during compilation for typos in the documentation, wrong use of @constructor, a wrong variable type, misuse of a field annotated with @private or @protected, etc.
  3. If you write a reusable JavaScript library, such as OpenLayers, you formally export your public API - and the compiler optimizes your internal code.
  4. End applications can be compiled together with the library - the unused parts of the library are then removed from the produced code. Dependencies are resolved automatically by the compiler.
  5. The compiler accepts constants to remove unwanted functionality - this allows compilation for a particular browser only (such as Mobile WebKit), for only one of Quirks or Strict mode, compilation without support for IE6, etc.
  6. Debugging with Firebug is possible even for the compiled version of the source code.
  7. The compiler supports generation of dynamically loadable modules, which can significantly speed up loading of the end application, because the code for advanced functionality is loaded only when it is required.
Let's go through these points one by one:

1. It compiles readable JavaScript into compressed machine-readable JavaScript.


The compiler loads the whole source code of the supplied JavaScript application into memory, performs a detailed analysis of the code, builds an internal dependency graph and, with knowledge of the syntax and partly also the semantics of the JavaScript language, performs the compilation.

If you use the default "SIMPLE_OPTIMIZATIONS" level, the Closure Compiler behaves just like any other JavaScript minifier: whitespace and comment removal, and renaming of local variables and function parameters to shorter names (only symbols that are local to functions). It can be used directly with the OpenLayers Build Profiles as they are available now - and you get an OpenLayers.js file that is around 20% smaller, compared to the default jsmin-based minification of the code. Check also the latest tickets #2871 and #2873 if you want smaller results for OL. By the way, official releases of jQuery are now minified with the Closure Compiler and SIMPLE_OPTIMIZATIONS.
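To give an idea of what the simple mode does, here is a small hand-written illustration (not actual compiler output):

    // Before compilation:
    function addPoints(firstPoint, secondPoint) {
      var sumX = firstPoint.x + secondPoint.x;
      var sumY = firstPoint.y + secondPoint.y;
      return {x: sumX, y: sumY};
    }

    // After SIMPLE_OPTIMIZATIONS (roughly): whitespace and comments are gone and
    // local names are shortened, but the function name and property names remain:
    // function addPoints(a,b){return{x:a.x+b.x,y:a.y+b.y}}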

The compiler is truly powerful when used with "ADVANCED_OPTIMIZATIONS". It then uses several tricks to shorten the code further, including dead code removal and function inlining. But the JavaScript code must be ready for this kind of compilation and must not contain problematic code patterns. This means that programmers are pushed to write their JavaScript in a more readable and maintainable way. A typical problem is the use of "this" outside of constructors and prototype methods. Programmers cannot use some of the techniques which the JavaScript language offers, but these restrictions also help to keep the code readable and reusable by other people later on. In general, programmers should follow the rules described in the Google JavaScript Style Guide. There is also a tool called Closure Linter for checking and automatically correcting JavaScript code, which can help. OpenLayers 2.x does not follow these rules right now, but I think it would make sense to write OpenLayers 3.0 with these rules in mind.
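A classic example of a pattern that breaks under advanced compilation (my own illustration, not OpenLayers code) is mixing dot notation and quoted string access for the same property - the compiler renames the former but must leave the latter untouched:

    var layer = {};
    layer.visibility = true;      // dot access: the property may be renamed, e.g. to layer.a

    // Somewhere else in the code:
    if (layer['visibility']) {    // quoted access is never renamed, so after advanced
      // ... redraw the layer     // compilation the two references no longer match
    }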

2. Documentation of the code with JSDoc Tags is important: the compiler reads it


OpenLayers 2.x, like other big JavaScript projects, uses JSDoc comments for inline documentation of the source code. The Google Closure Compiler parses these JSDoc tags and uses them to enhance the code optimization process and, when run with "--warning_level=VERBOSE", to print warnings during compilation for potentially buggy code patterns or mistakes.

In fact, through JSDoc comments the JavaScript language is enhanced with a complete type system for variables and with visibility modifiers for object members.

The set of JSDoc annotations and type expressions which the Google Closure Compiler understands is described at http://code.google.com/closure/compiler/docs/js-for-compiler.html.

The introduced type system is powerful. OpenLayers can profit from the possibility of defining its own object types; for example, "OpenLayers.Location" can be a new type which must always contain ".x" and ".y" properties of type number. The compiler can enforce this: whenever a function expects "OpenLayers.Location" as a parameter, you receive a warning during compilation if you pass an object which does not comply.
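As an illustration, such a type could be annotated like this (a hypothetical OpenLayers 3.x fragment, just a sketch of the idea):

    var OpenLayers = {};

    /**
     * A geographic location.
     * @param {number} x The horizontal coordinate.
     * @param {number} y The vertical coordinate.
     * @constructor
     */
    OpenLayers.Location = function(x, y) {
      /** @type {number} */
      this.x = x;
      /** @type {number} */
      this.y = y;
    };

    /** @constructor */
    OpenLayers.Map = function() {};

    /**
     * Pans the map to the given location.
     * @param {OpenLayers.Location} location Where to pan to.
     */
    OpenLayers.Map.prototype.panTo = function(location) {
      // With --warning_level=VERBOSE the compiler warns whenever panTo() is
      // called with an argument that is not an OpenLayers.Location.
    };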

3. Public API is formalized - the compiler needs it


It is important to have a formal definition of the public API which you export from a library such as OpenLayers - advanced compilation of a publicly usable JavaScript library is not possible without it. This means you are pushed to keep the API clean - as well as correctly documented with JSDoc.
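A common way to export the API (a sketch; the Closure Library also offers goog.exportSymbol for the same purpose) is to assign the public symbols through quoted property names, which the compiler never renames:

    // Internal code can be renamed and optimized freely...
    OpenLayers.Map.prototype.zoomToMaxExtent = function() {
      // ...
    };

    // ...while the exported API is referenced with string names, which survive
    // the renaming, so external applications keep working against the same names:
    window['OpenLayers'] = OpenLayers;
    OpenLayers['Map'] = OpenLayers.Map;
    OpenLayers.Map.prototype['zoomToMaxExtent'] =
        OpenLayers.Map.prototype.zoomToMaxExtent;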

4. The end applications can be compiled together with the library


End-user applications can be programmed in the traditional way: the OpenLayers library is used as a standalone OpenLayers.js script included in the HTML header of the website, while the application itself has a separate code base and uses the public OpenLayers API. This approach is common with OpenLayers 2.x.

An alternative approach is to merge the code of the application with the code of the library and compile the result together - this way the unused parts of the library are automatically removed from the final code, with dependencies resolved by the compiler. Only the JavaScript functionality which is really used by the end application then becomes part of the final compiled .js file used for deployment and production.
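A trivial illustration of the effect (made-up function names): if the application below is compiled together with its "library" in advanced mode, transformToSpherical() never appears in the output, because nothing calls it.

    // "Library" code:
    function transformToMercator(location) {
      return {x: location.x * 20037508.34 / 180, y: location.y};  // simplified
    }
    function transformToSpherical(location) {
      return location;  // never called by the application below
    }

    // "Application" code, compiled together with the library:
    var center = transformToMercator({x: 8.54, y: 47.37});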

5. Constants defined during compilation to remove unwanted functionality


Variables annotated with the @define JSDoc tag can be redefined at compile time, and the compiler can then recognize blocks of code conditioned on such variables as unreachable. Because dead code is removed automatically, such source code is completely stripped from the resulting .js file.
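A minimal sketch of the mechanism (the constant name is made up); the value can be flipped on the command line with --define, and the unreachable branch disappears from the compiled output:

    /** @define {boolean} Whether workarounds for IE6 should be compiled in. */
    var OPENLAYERS_SUPPORT_IE6 = true;  // e.g. --define=OPENLAYERS_SUPPORT_IE6=false

    function listen(element, handler) {
      if (OPENLAYERS_SUPPORT_IE6 && !element.addEventListener) {
        element.attachEvent('onclick', handler);          // old IE code path
      } else {
        element.addEventListener('click', handler, false);
      }
    }
    // Compiled with OPENLAYERS_SUPPORT_IE6 defined as false, the whole
    // attachEvent branch is recognized as dead code and removed.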

A typical use case for this is a block of code executed only if some DEBUG variable is set to true, but there are also other use cases:

The source code of a JavaScript library can contain plenty of functions to handle the differences between the Quirks and Strict rendering modes of a web browser, determined by the DOCTYPE declaration at the beginning of the HTML page. A general JavaScript library must support both modes, but as the developer of a web application you control which rendering mode your application runs in. This means the handling of Quirks mode can easily be stripped from the library you are using.

Similarly, if you are developing a mobile web application for the iPhone or Android platform, you don't really need code in your application which is specific to Internet Explorer or to older versions of Firefox or other browsers.

The Closure Library, the standard JavaScript library that comes with the Closure Tools, already supports this kind of conditional compilation for different browsers (via goog.userAgent.ASSUME_MOBILE_WEBKIT) and rendering modes (goog.dom.ASSUME_STANDARDS_MODE).

I saw at FOSS4G that closer interoperability with existing general-purpose libraries such as Prototype, jQuery or the Closure Library is planned for OpenLayers 3.x - for DOM access, string operations and other basic functionality - to avoid reinventing the wheel inside OpenLayers and to eliminate duplicate code for the same functionality in final applications, which often already use one of these libraries anyway. The conditional compilation with constants supported by the Closure Compiler can help in this case as well.

6. Debugging with FireBug


Minified JavaScript code is normally very hard to debug, because the code is obfuscated and there is no reference to the original formatting and variable names.

The Closure Compiler offers two handy approaches for debugging code compiled in ADVANCED_OPTIMIZATIONS mode:

The first is the parameter "--debug=true", which makes the renamed symbols, normally shortened to one or two letters, keep meaningful names - e.g. "OpenLayers.Location.prototype.setValue" becomes $$OpenLayers$Location$$$$$setValue$$ instead of, for example, "aa". This parameter is very often used together with "--formatting=PRETTY_PRINT".

The second option is to use the final compiled code together with the Closure Inspector extension for Firefox. The compiler generates a mapping file between the original source code and the compiled code, and once this mapping file is loaded, the extension simplifies debugging with Firebug.

7. Dynamically loadable modules


Google Maps API V3, as well as other Google products, is compiled to load only a small core (bootstrap) of the necessary functionality at first and to load extensions with more code later. This approach can significantly speed up the initial appearance of a web application, which is especially important on mobile devices.
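The runtime side of this idea can be as simple as injecting a script tag once the extra functionality is actually needed (a hand-written sketch with made-up names; in practice the compiler's module output and the Closure Library's module manager handle the details):

    // Load a compiled module (e.g. vector editing support) on demand.
    function loadModule(url, onLoad) {
      var script = document.createElement('script');
      script.src = url;
      script.onload = onLoad;
      document.getElementsByTagName('head')[0].appendChild(script);
    }

    // Only fetch the editing code when the user actually opens the editor:
    var editButton = document.getElementById('edit-button');  // hypothetical button
    editButton.onclick = function() {
      loadModule('openlayers_editing_module.js', function() {
        // Functions defined by the loaded module are available from here on.
      });
    };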

The Closure Compiler of course supports this form of compilation via parameters on the command line.

The whole project can be compiled with a simple Makefile or with custom merge and build scripts, but there are also several tools available to simplify the compilation, the module dependencies and the handling of code spread over several files - these tools are especially useful if you are using the Closure Library or you need compilation into modules: Plovr, ClosureBuilder, or Closure Modules. The Closure Library also ships with closurebuilder.py and depswriter.py.

I am very keen to discuss the use of the Closure Compiler, and its advantages and disadvantages, with the OpenLayers developers and the community.

We have already used the compiler and the Closure Library in several projects which are now in production, so we have practical experience with these tools - for example from the development of the web interface for our MapRank Search product.

If there is interest, I can publish another blog post with the code of a simple project and a step-by-step guide demonstrating the use of the Closure Compiler and Library.

The Closure Tools are definitely worth the attention of any web developer. Please write in the comments what you think about the subject of the Closure Compiler and OpenLayers V3.

Thursday, November 25, 2010

National Library of Scotland: Georeferencer


Various historical maps from the collection of the National Library of Scotland have been georeferenced thanks to our tool Georeferencer. You can find more information about this project as well as a step-by-step guide at: http://maps.nls.uk/projects/georeferencer/

The first public release, in November 2010, included 1,000 early maps of Scotland. A wide variety of maps was included - maps of the whole of Scotland, county maps, town plans, coastal charts, and estate maps, dating from between 1580 and the 1920s. About half of the maps were georeferenced in the first 16 months, with some categories of maps, such as town plans, completely georeferenced during that time (http://www.dlib.org/dlib/november12/fleet/11fleet.html).

Why georeference maps?

Georeferencing allows you to compare historical maps directly with present-day satellite images and to change the transparency dynamically. At the same time, it becomes very easy to share and use the maps, georeference them in more detail and view them alongside other georeferenced historical maps of the same area. Last but not least, it helps to improve search methods for finding maps in the future (with an intuitive system such as MapRank Search).

Monday, June 14, 2010

Apple presented GDAL2Tiles on the stage of WWDC




During the technical sessions of WWDC 2010, Apple engineers recommended our software, created by Klokan Technologies.

What did they say? You can watch the video (available after free registration) at http://developer.apple.com/videos/wwdc/2010/.

James Howard, a software engineer on the Map Kit team at Apple Inc., said on the stage of WWDC 2010 in the USA: "Gdal2Tiles are a really great utility. I recommend using it."

Apple even officially published the source code of an iPhone application where you just drop in the tiles generated by GDAL2Tiles and you get an offline viewer: https://github.com/klokantech/Apple-WWDC10-TileMap




Friday, May 21, 2010

Custom style for Google Maps

I am impressed by the new functionality in Google Maps API V3: StyledMaps 

You can create your own dynamic style for Google Maps tiles - and change the look and feel of the base maps which you want to include in your website.



It is quite a natural step - something that open-source projects like Cascadenik and products of other companies such as CloudMade made possible some time ago. The adoption of this functionality by Google brings the possibility to the masses.

If you want to define your own styles for the map, you can use the online Style Map Wizard tool. More info and the official announcement are available here.

Wednesday, May 12, 2010

FOSS4G 2010: Vote for OldMapsOnline.org!

I have submitted a proposal for a presentation at the FOSS4G conference in Barcelona, and I would like to ask the OSGeo community and other people who plan to visit the conference to vote for the presentation: "OldMapsOnline.org: Open Source & Online Tools for Old Maps".

In the OldMapsOnline.org project we are developing open-source software and designing online tools for collaborative georeferencing, annotation, 3D visualisation, accuracy analysis and geometadata specification for old maps (or, in general, any raster images) from the web browser.



One of the most interesting and practical results is an online service for georeferencing scanned maps or any other online images. Directly from the web browser you can georeference any online image published as JPEG, any image already published on Wikimedia or Flickr, a collection of online tiles (Zoomify, DeepZoom, ...) or imagery published on one of the supported image servers (IIPImage, Lizardtech Express MrSID, DigiTool, ...). At this moment the service is under active development, and I would like to announce the results at FOSS4G 2010 in Barcelona.

In OldMapsOnline.org we have also contributed to several open-source projects. You can find our code in OpenLayers (Zoomify support in 2.9), GDAL (GDAL2Tiles), GeoTools, IIPImage and other FOSS projects.

We have produced two new open-source projects:

- MapTiler: user-friendly tile map publishing a la Google Maps: http://www.maptiler.org/

- IIPImage JPEG2000: open-source server software for fast delivery of ultra-high-resolution raster imagery directly from JPEG2000 or TIFF files. MooViewer, OpenZoom, Zoomify, DeepZoom or OpenLayers provide an attractive user experience on the client side (usually in the web browser). http://help.oldmapsonline.org/jpeg2000/

If you are interested in knowing more about our project, feel free to explore our websites: http://help.oldmapsonline.org/.

To vote for the presentation, please visit http://2010.foss4g.org/review/ and follow the instructions there - the deadline is this Friday (May 14th)! I am looking forward to meeting you in Barcelona ;-)

Friday, January 15, 2010

OziExplorer OZF format specification + open-source decoder!


Yesterday I was testing the OZEX project, which intends to be an open-source replacement for the popular OziExplorer software. The most interesting thing about OZEX is that it is able to decode and display the OZF2 and OZFx3 binary files on Linux and other platforms, and that the decoder is completely open-source!
I know about other nice open-source projects targeted at OziExplorer users, with advanced GUIs and interesting features - look at the QLandKarte GT screenshots, for example.

This is the first time I have seen an open-source implementation of the OZF2 and OZFx3 binary formats!
It would be excellent to create a decoder in the GDAL library as well (as a driver), because it would bring OZF reading/decoding functionality into several other open-source projects.

I have submitted to the GDAL SVN documentation of the format derived from the source code, some sample files, as well as links to the OZEX GPL code. More sample files can be generated with the img2ozf utility (which runs well under Wine).
Unfortunately I am now busy with other projects, but I hope that one of the GDAL developers finds a bit of time to code the OZF driver...

GDAL already has preliminary support for OziExplorer's .map files (textual metadata; think of an advanced ESRI World File with included info about the map projection), but support for the binary formats of the OZF family (version 2 and version 3) would move the compatibility to a different level.

OziExplorer is very popular in the GPS and geocaching community. Support in the OSGeo open-source tools for maps generated or georeferenced with this software would be great! I hope to see it in the near future in MapTiler, GRASS, QGIS, MapServer, GeoServer and all the other FOSS GIS tools! Anybody interested in doing the coding for GDAL?

Wednesday, January 13, 2010

IIPImage JPEG2000: Free Software for Zoomable High Resolution Online Images

As the technical manager of the OldMapsOnline.org project, I am very pleased to post a note about our results to this blog:

Moravian Library and the OldMapsOnline.org project are proud to announce the release of a new version of the open-source IIPImage server software (http://help.oldmapsonline.org/jpeg2000/).

The freely available IIPImage software can be used for stunning online presentations of scanned documents, paintings, maps, books, newspapers, photographs or other high-resolution images on the web directly from JPEG2000 or TIFF files.

The new version allows direct publishing from JPEG2000 images to a wide variety of client technologies based on AJAX, Adobe Flash or Silverlight. These include popular pan&zoom viewers based on Zoomify or Seadragon technology (including the Seadragon AJAX viewer and the Seadragon iPhone application) as well as its own AJAX-enabled IIPMooViewer. The documents served by IIPImage can be displayed in any web browser and on a number of platforms - Windows, Mac, Linux or iPhone.

The software is primarily targeted at institutions which operate their own server connected to the Internet and want to publish large collections of digital images directly from JPEG2000 or TIFF files.

Institutions which do not have the necessary infrastructure can follow our alternative tutorial at http://help.oldmapsonline.org/publish/ on how to achieve the same using standard web hosting and free software.

IIPImage is a lightweight client-server system for fast and efficient online viewing and zooming of ultra-high-resolution images. It is designed to be bandwidth- and memory-efficient and usable over a slow Internet connection, even for gigapixel-sized images.
It is available for free, under an open-source license (GNU GPL). We recommend installing the software on a Linux (or other UNIX) server. We have prepared an easy-to-install binary package for Debian and Ubuntu with step-by-step installation instructions.

JPEG2000 support has been implemented using the Kakadu library, which provides one of the fastest implementations of the JPEG2000 ISO standard and is redistributable for non-commercial use.

The enhancement of IIPImage was developed by the Moravian Library and the OldMapsOnline.org project with the support of grants from the Ministry of Culture of the Czech Republic.

The Moravian Library (http://www.mzk.cz/), based in Brno, Czech Republic, is a research institution and a legal deposit library. Project OldMapsOnline.org (http://www.oldmapsonline.org/) is a research project of the Moravian Library that aims to develop software to assist in the management, manipulation and visualisation of historical map collections on the web. The project team is designing online tools for publishing, collaborative georeferencing, annotation, 3D visualisation, accuracy analysis and geometadata specification for old maps.

For more information and for the IIPImage JPEG2000 software, see http://help.oldmapsonline.org/jpeg2000/.