
TechWeb08.GruppoLTW22Doc r1.62 - 02 Jul 2008 - 00:24 - ThomasHeigl

-- ThomasHeigl - 30 Jun 2008

Group 22 Project Documentation

This page describes the implementation of the PoliWiki protocol by GruppoLTW22. In the following, the technologies used, the architecture, as well as development and testing of the individual layers of the application are discussed.


Application Core and Application Logic (AC/AL)

Core Technologies Used


The architecture of the AC/AL part of the application was kept as clean and modular as possible. To avoid the verbosity of plain ECMAScript (especially for DOM manipulation), we made extensive use of two libraries: jQuery, for its compact syntax and the vast number of available plugins, and Sarissa, a cross-browser wrapper around the native XML APIs. With Sarissa, cross-browser XML manipulation and transformation become much less painful, and implementations even work in Internet Explorer. Where possible, we avoided the direct construction and manipulation of XML documents through ECMAScript and used client-side XSLT to do the hard work. The only downside of this approach is that we had to drop support for the Konqueror browser, which, for some reason, has no XSLT implementation at all.

When the user points the browser to the project's website, it downloads the PoliWiki "shell", a simple XHTML document that includes all the necessary libraries and launches the application. As browsers currently do not allow cross-domain XMLHttpRequests (XHRs), the application uses a simple PHP-based proxy server to communicate with the outside world. XHRs are sent through jQuery's abstraction layer, which provides many useful features such as callbacks on specific events. Because of inconsistencies we encountered in the browsers' handling of XML responses, the Sarissa library is used to parse documents from the raw response string. A client-side datastore keeps the XML data of the current page in memory, allowing quick changes of layouts and skins at any point in time.
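The proxy approach means every external request is routed through a same-origin URL. A minimal sketch of how such a URL could be built follows; the script name "proxy.php" and the "url" parameter are our own assumptions, not the project's actual names:

```javascript
// Sketch: building a same-origin URL for a PHP-based proxy like the one
// described above. "proxy.php" and the "url" parameter are hypothetical.
function proxyUrl(target) {
  return 'proxy.php?url=' + encodeURIComponent(target);
}

// The client would then issue the request through jQuery, e.g.:
// jQuery.get(proxyUrl('http://datasource.example.com/ds'), callback);
```

Because the proxy lives on the same origin as the shell document, the browser's same-origin policy is satisfied and the XHR succeeds.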

Whenever possible, we kept static data outside of the script. Special pages such as the search page and the homepage are saved as external XML templates. ECMAScript uses a local, synchronous XHR to read the template and adds dynamic information using the Sarissa XML library.

Development and Testing

The AL was written almost entirely with the Eclipse IDE and the Aptana plugin for Web 2.0 development. Although this might seem like overkill, ECMAScript code completion and debugging, together with all the features of the Eclipse IDE such as SVN integration and refactoring, were the main reasons for sticking with this solution.

Testing was mainly done using Firefox 2.0 and 3.0 and three indispensable plugins: Firebug for inspecting the DOM and debugging XHRs, LiveHTTPHeaders to view the contents of requests and responses, and the Webdeveloper Toolbar for such handy features as CSS/HTML validation, automatic form population, and so on. Because of painful previous experience, however, the application had to undergo cross-browser testing from the very beginning. While Safari and Opera ship with their own development tools, in the case of Internet Explorer we had to make use of the third-party tool DebugBar. As mentioned before, support for Konqueror was dropped midway through development in favor of a cleaner and more modern architecture based on client-side XSLT.

For unobtrusive, flexible logging we used the log4javascript library, which provides logging support similar to Java's log4j and displays log messages in a popup window.


Our implementation of the PoliWiki AC/AL provides all functionality requested in the project specification. The following "special pages" have been implemented:

  • a homepage with some dynamic content
  • basic and advanced search pages
  • a page for creating a new document
  • a page for creating a new version of an existing document
  • a page providing usage information

The homepage displays the datasources that are currently available in the application and a number of predefined, recommended queries that function as shortcuts to the content. The basic search page restricts the search to the titles of documents, while still providing full support of the *-wildcard. The advanced search page provides a more fine-grained interface that lets the user specify a range of parameters for the search, all of which support wildcards.
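The *-wildcard semantics the search pages expose can be illustrated by translating a pattern into a regular expression. This is a sketch of the semantics only (the actual matching happens on the server side, not in this hypothetical helper):

```javascript
// Sketch of the "*"-wildcard semantics used by the search pages: "*"
// matches any run of characters, everything else is literal.
// Hypothetical helper for illustration, not the project's code.
function wildcardToRegExp(pattern) {
  // Escape regex metacharacters except "*", then expand "*" to ".*".
  var escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, '\\$&');
  return new RegExp('^' + escaped.replace(/\*/g, '.*') + '$');
}
```

With this reading, the pattern `Poli*` matches any title that begins with "Poli", while characters such as `.` keep their literal meaning.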

The pages for creating new documents and versions obviously contain forms. As the main content of a document can contain restricted XHTML markup, the cross-browser WYSIWYG editor FCKEditor was used to handle the input. It provides automatic filtering of the resulting markup, dialogs to include images and links, filtering of content pasted into the document from the clipboard, and much more. Using a special jQuery plugin, FCKEditor can be added to a textarea element in an unobtrusive manner. For fields that allow multiple values, links are provided that allow the dynamic addition and removal of form fields. Form validation, finally, is done unobtrusively using jQuery's Validation plugin. Once a user submits the form, the submit button is disabled to prevent multiple submissions.

At any point in time, the user can change the layout and/or skin of the application. If the user chooses a formatter that produces non-XHTML output such as PDF or LaTeX, the document is loaded in a new window; otherwise the new style is applied dynamically to the active page.


Datasource

Core Technologies Used


The datasource's architecture is based around the open-source search server Apache Solr. The server, once deployed to an appropriate web container, exposes a REST-based API that allows the addition, update, removal, and querying of documents in a manner very similar to the one specified for PoliWiki. For example, a query specified by the protocol looks like etitle=*&ecreator=*, whereas in Solr it would be etitle:*+ecreator:*.
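The field=value to field:value rewriting shown in the example can be sketched in a few lines of string handling. The function name and the assumption that parameters are always joined with '&' are ours:

```javascript
// Sketch: rewriting a PoliWiki query string into Solr query syntax,
// e.g. "etitle=*&ecreator=*" -> "etitle:*+ecreator:*".
// Hypothetical helper, for illustration of the mapping only.
function poliwikiToSolr(query) {
  return query.split('&').map(function (pair) {
    var idx = pair.indexOf('=');
    return pair.slice(0, idx) + ':' + pair.slice(idx + 1);
  }).join('+');
}
```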

All that was left to do was to write three small servlets - GetServlet, SaveServlet, and SearchServlet - that take PoliWiki requests, transform them into Solr requests, forward them to the server, and finally transform the results back into a document of the PoliWiki protocol. These transformations between Solr and PoliWiki are implemented in XSLT 2.0. JAXB is used to bind XML to Java objects in order to manipulate and inspect the documents that pass between the servlets and Solr. Incoming documents (new documents and new versions of existing ones) are validated against the schema to prevent malformed data from entering the system.
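A transformation of this kind might look roughly as follows. The element names on the PoliWiki side are hypothetical (only the field names etitle and ecreator are taken from the query example above); the <add>/<doc>/<field> structure is Solr's standard XML update format:

```xml
<!-- Sketch: an XSLT 2.0 stylesheet turning a (hypothetical) PoliWiki
     document into a Solr XML update message. -->
<xsl:stylesheet version="2.0"
                xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:template match="/document">
    <add>
      <doc>
        <field name="etitle"><xsl:value-of select="title"/></field>
        <field name="ecreator"><xsl:value-of select="creator"/></field>
      </doc>
    </add>
  </xsl:template>
</xsl:stylesheet>
```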

All configuration parameters, such as the address and port of Solr and the name of the datasource, can be configured declaratively in an XML file. The servlets access the configuration file using the dependency injection feature of the Spring Application Framework.
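A declarative configuration of this sort could be expressed as a Spring bean definition along these lines; the bean id, class name, and property values are our own placeholders, not the project's actual configuration:

```xml
<!-- Sketch: Spring bean configuration for the datasource parameters.
     All names and values here are hypothetical placeholders. -->
<beans xmlns="http://www.springframework.org/schema/beans">
  <bean id="solrConfig" class="example.SolrConfig">
    <property name="host" value="localhost"/>
    <property name="port" value="8983"/>
    <property name="datasourceName" value="GruppoLTW22"/>
  </bean>
</beans>
```

A servlet then receives the populated configuration object via dependency injection instead of reading the file itself.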

Development and Testing

All Java development was done using the Eclipse IDE, while XSLT stylesheets were developed and tested using the oXygen XML Editor. The application was tested using a mixture of unit testing and remote debugging of the "live" servlets. The biggest challenge was getting the servlet container to support UTF-8 correctly: changes to Tomcat's configuration settings as well as parsing of incoming data with an explicitly specified encoding were necessary. Although Java strings are Unicode internally, the platform default character encoding varies, which still causes many issues in web application development.


Formatter

Core Technologies Used


As with the datasource, the main architectural principle was not to reinvent the wheel, but modify and extend existing solutions. In Apache Cocoon we found a mature, well-documented open-source framework that supports all aspects required by the PoliWiki protocol.

Cocoon is a web application framework designed around the manipulation and transformation of XML documents. The developer defines so-called pipelines that declaratively describe what should happen with XML data. Each pipeline consists of three components: A Generator that produces XML data from some source, one or more Transformers that modify the XML data in some way, and a Serializer that outputs the XML data in some format. The figure below exemplifies this concept:

[Figure: A Simple Cocoon Pipeline]

Cocoon ships with a vast number of predefined components, and it is relatively easy to develop one's own. For the purpose of the PoliWiki application, however, we made use of only a handful:

  • StreamGenerator to read XML data from the HTTP request
  • XSLTransformer to apply XSL transformations to the XML stream
  • XMLSerializer to write XML data to the HTTP response
  • FOPSerializer to output an XSL:FO document as PDF

Pipelines, and how they are mapped to specific URLs, are defined in an XML document called the Sitemap. The mapping itself is highly flexible and can be based on wildcards, the user agent, specific request headers, and much more. The only feature not supported directly is mapping based on the content of the source XML. In the case of PoliWiki, however, this feature is required: the formatter has to inspect the incoming document to know which layout and which skin to apply. (Note: later versions of the protocol actually changed this requirement and allowed the specification of layout and skin via request parameters, but the formatter had already been written at that time.) The dozen or so lines of ECMAScript written for this purpose were the only code apart from XSLT written for the entire formatter application.
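The pipeline components listed above are wired together in the Sitemap. The following fragment sketches what such a Sitemap could look like; the URL patterns and stylesheet names are our own assumptions, while the component types (stream generator, XSLT transformer, XML and fo2pdf serializers) are standard Cocoon components:

```xml
<!-- Sketch of a Cocoon Sitemap wiring the components described above.
     Patterns and stylesheet names are hypothetical. -->
<map:sitemap xmlns:map="http://apache.org/cocoon/sitemap/1.0">
  <map:pipelines>
    <map:pipeline>
      <map:match pattern="format/xhtml">
        <map:generate type="stream"/>          <!-- read XML from the request -->
        <map:transform src="xhtml-layout.xsl"/><!-- apply layout and skin -->
        <map:serialize type="xml"/>            <!-- write XHTML to the response -->
      </map:match>
      <map:match pattern="format/pdf">
        <map:generate type="stream"/>
        <map:transform src="fo-layout.xsl"/>   <!-- produce XSL:FO -->
        <map:serialize type="fo2pdf"/>         <!-- render the FO as PDF -->
      </map:match>
    </map:pipeline>
  </map:pipelines>
</map:sitemap>
```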

Two efficient formatters were implemented on top of this architecture: one produces clean, standards-compliant XHTML, the other basic PDF.

Development and Testing

Pipelines, Sitemaps, and ECMAScript were written with a basic text editor, while the XSLT stylesheets were developed and debugged using oXygen. As with the datasource, the biggest problem was to fully support UTF-8. The solution that finally proved successful was to write a Servlet Filter that is registered in the application's web.xml descriptor and forces the request to be treated as UTF-8.
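The registration of such a filter in web.xml follows the standard servlet deployment descriptor syntax; the filter name and class below are hypothetical placeholders for the project's actual filter:

```xml
<!-- Sketch: registering a character-encoding filter in web.xml.
     Filter name and class are hypothetical. -->
<filter>
  <filter-name>utf8Filter</filter-name>
  <filter-class>example.Utf8EncodingFilter</filter-class>
</filter>
<filter-mapping>
  <filter-name>utf8Filter</filter-name>
  <url-pattern>/*</url-pattern>
</filter-mapping>
```

Inside its doFilter method, the filter would call setCharacterEncoding("UTF-8") on the request before passing it down the chain.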



Copyright © Fabio Vitali 2019 Last update of GruppoLTW22Doc on 02 Jul 2008 - 00:24 by ThomasHeigl