Facility to import an existing, static HTML site structure into Drupal nodes.
This is done by allowing an admin to define a source directory containing a traditional HTML website, and importing (as much as possible) the content and structure into a Drupal site.
Files will be absorbed completely, and their existing cross-links will be maintained, while the standard headers, chrome and navigation blocks are stripped and replaced with Drupal equivalents. The old structure will be inferred and imported from the old folder hierarchy.
See the setup section for details. Because of the number of settings, this is not just a point-and-go module.
This module uses no database tables of its own. It requires XML support on the server, which can be tricky to enable if it isn't already.
Given a working system, the process is thus:
By following these instructions, you should be able to end up with a version of the old content in the new layout. For large sites (200+ pages) some extra tuning may be necessary, e.g. using different templates for different sources.
Incremental imports (processing just a section at a time) and repeated imports (as you tune the content or the transformation) should be non-destructive. Re-importing the same file will retain the same node ID and path, and any Drupal-specific additions made so far.
This is intended as a run-once sort of tool that, once tuned right on a handful of pages, can churn through a large number of reasonably structured, reasonably formatted pages, doing a lot of the boring copy & paste that would otherwise be required.
The existing file paths of the source content will be used to create an automatic menu, and therefore a hierarchical structure identical to the source URLs. With path.module, appropriate aliases will also be created, such that this will enable a Drupal instance to TRANSPARENTLY REPLACE an existing static site without breaking any bookmarks!
A peek under the hood into what happens in what order
The more valid and more homogeneous the source site is, the better. A site built using strict XHTML and useful, semantic tags like #title and #content could be imported swiftly. One with a variety of table structures may not...
Of course, this tool is supposed to be useful when dealing with messy, non-homogeneous legacy sites that need a makeover. Sometimes regular expression parsing may come to the rescue for content extraction, but that's not implemented yet.
I'm choosing XSL because I know it, it's powerful for extracting content out of (well-structured) HTML, and I've had success with this approach in the past. Others may object to this abstract technology (XSL is NOT an easy learning curve), but the alternative options are RegExp weirdness or cut and paste (which I may patch on as alternative methods - or someone else can have a go). I've also used both of those approaches successfully in bulk site templating (over THOUSANDS of pages), but it's my call. Making your own XSL import template is non-trivial.
The module can use either the PHP4 or PHP5 XSL implementations (which are quite different), but the PHP extensions do have to be enabled somehow. This can be tricky, as they often require extra libraries to be put in your path somewhere. Please don't ask me for instructions; every time I've done it, it hurts my head.
If you can see the words XSL or XSLT in your phpinfo() output, you should be fine. The module will test and warn you anyway.
The module also uses the famous HTMLTidy tool. There is now a PHP extension that implements HTMLTidy natively, but that needs to be installed and enabled. If you don't have access to that, it can be run from the command line instead. Find the appropriate binary release of HTMLTidy for your system, place it in your PATH, in the module's install directory, or wherever you like, then define the path to the executable in the settings. This works fine under Windows too.
If this sounds complicated, and you only have limited access to the Unix host where you need to use it, there is an auto-installer that can attempt to set up tidy even on a box you don't have login access to.
An import template defines the mapping between existing HTML content and our node values. It uses the XSL language because of the power it has to select parts of a structured document. For example, select="//*[@id='content']" will find the block anywhere in the page, of any type, with the id 'content', and select="//table[@class='main']//td[position()=3]" will locate the third TD cell in the table with the class 'main'. Both of these examples would be common when trying to extract the actual text from a legacy site.
You can begin with the example XSL template. It contains code that attempts to translate a page containing the usual HTML structures, like (either title or h1) and (either the div called 'content' or the entire body tag), into a standard, minimal, vanilla, semantically-tagged HTML doc.
It's likely that whatever site you are importing will NOT be shaped exactly the way it needs to be to translate straight through using this format. You have to identify the parts of your existing pages that can reliably be scanned for to define content, then come up with an XPath expression to represent them.
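For illustration, a stripped-down template along those lines might look something like the sketch below. This is NOT the example template shipped with the module; the output element names and structure here are assumptions, just to show the general shape.

<?xml version="1.0"?>
<xsl:stylesheet version="1.0"
  xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
  <xsl:output method="xml" indent="yes"/>

  <!-- NB: if tidy emits XHTML with a namespace declaration, these XPaths
       would need adjusting to match namespaced elements. -->
  <xsl:template match="/">
    <html>
      <head>
        <title>
          <!-- Prefer the first H1, fall back to the HEAD title -->
          <xsl:choose>
            <xsl:when test="//h1">
              <xsl:value-of select="//h1[1]"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:value-of select="/html/head/title"/>
            </xsl:otherwise>
          </xsl:choose>
        </title>
      </head>
      <body>
        <div id="content">
          <!-- Prefer a block with id 'content', fall back to the whole body -->
          <xsl:choose>
            <xsl:when test="//*[@id='content']">
              <xsl:copy-of select="//*[@id='content'][1]/node()"/>
            </xsl:when>
            <xsl:otherwise>
              <xsl:copy-of select="/html/body/node()"/>
            </xsl:otherwise>
          </xsl:choose>
        </div>
      </body>
    </html>
  </xsl:template>
</xsl:stylesheet>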
If your source, for example, didn't use nice H1 tags to denote the page title, but instead always looked like
<font size='+2'><B>my page</B></font>
... your template could be made to find it, wherever it was in the page
using
select=\"//font[@size='+2']/B\"
and proceed to use that as the node title.
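Inside your import template, that select could feed the title of the output document, something like this (a sketch only; note that once the page has been through HTMLTidy the tags will usually have been lowercased, so //font[@size='+2']/b may be what you actually need):

<title>
  <xsl:value-of select="//font[@size='+2']/B"/>
</title>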
No, the code is not pretty, and if regular expressions are a foreign language to you, this is worse. But this is why developers have been ranting for the last ten years about using semantic markup!!
The uniformity and usefulness of the metadata detected in the source files will play a big part here.
It's easier to develop and test the XSLT using a third-party tool; I recommend Cooktop. Be sure to set the XSL engine to 'Sablotron', which is the one PHP uses under the hood.
Although it would be possible to configure a logical mapping system to select different import templates based on different content, at this stage the administrator is expected to be doing a bit of hand-tweaking, and predicting all possible inputs is impossible. Some of this sort of logic can, however, be built into the powerful XSL template itself, if you are good at XSL.
Once importing is taking place, you can filter the input even further to improve its structure, for example by removing all redundant FONT tags, or by ensuring that every H1, H2, H3 tag has an associated ID for anchoring. Yay XSL.
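For example, a pair of templates like this (a sketch to be merged into your own import template) would copy everything through unchanged while dropping FONT tags and keeping their contents:

<!-- Identity transform: copy every node and attribute through as-is -->
<xsl:template match="@*|node()">
  <xsl:copy>
    <xsl:apply-templates select="@*|node()"/>
  </xsl:copy>
</xsl:template>

<!-- But drop FONT tags themselves, keeping only what was inside them -->
<xsl:template match="font">
  <xsl:apply-templates/>
</xsl:template>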
On the admin/settings/import_html screen, you can (if you wish):
Files and folders beginning with _ or . are treated as 'hidden', so they are skipped and do not show up on this listing. While it is possible to list a thousand or so files, it may be a good idea to make the listing more selective so it scales to larger sites. Do this by entering the subsection to list before clicking 'list', rather than waiting for every file on the server to be enumerated.
As mentioned in Usage, this module uses no database tables of its own. Pages are read straight into 'page' nodes. I guess it could feed into flexinode if your import files had extra parsable content blocks, and I've successfully used it to import other random XML formats (RecipeML), although the advantages of doing so are limited.
It's easy to imagine this system set up as a synchroniser that could re-fetch and refresh local nodes when remote content changes. This would involve recording exactly what the source URL was (which isn't currently done), but it would be a fun feature.
I may fork off the page-parsing into a pluggable method, so that a regexp version can be developed alongside and used by folk without XSL support.
To import a local site to a remote server, you must either unpack the source files somewhere on that machine and provide the absolute path where the server can find them, or upload a zip package and I'll try to take it from there. (TODO)
Also TODO is a 'spidering' method to try to import sites by URL. Way in the future!
TODO (done): Allow settings to set the import content type to something other than 'page'.
TODO: Find a way to map more metadata from the original page (assuming there is any to be extracted) to Drupal properties, e.g. get the contents of META keywords into taxonomy associations.
TODO: There are issues when a page links directly, via an href, to a file that would be regarded as a resource. Most hrefs are re-written to point to the new node, but things like large images or Word docs get imported under 'files'. The XSL template rewrite_href_and_src.xsl attempts to correct for this, but there may be some side-effects. Always run a link checker after import.
The PHP4 XML parser (Sablotron) has trouble with duplicate attributes - if one is found in a tag (as in old, bad HTML), all subsequent input will be flattened to plain text. Older versions of HTMLTidy do not detect and fix these for us, so make sure your tidy supports the repeated-attributes option. It seems the command-line version fixed this somewhere between the 2000 and 2004 releases. (Not sure about the PHP extension version - it's PHP5, so it should be OK.)
Internal page anchors are still a problem in Drupal, but that should be fixed by an output-filter, not by HTML rewrite here.
I've gone to great lengths to rewrite the links from the new node locations as relative links to the resources that moved over into /files/, but there are problems. When a/long/path/index.html links to its image by going ../../../files/a/long/path/pic.jpg, it works, which is good. But as a/long/path/index.html is also aliased to a/long/path, that up-and-over path is wrong: the page is now being served from what looks to the browser like a different place.

I don't favour embedding anything that hard-codes the Drupal base_url, and we don't want to use HTML BASE. I want to continue to support portable subsites, so embedding site-rooted links (/files/etc) is not great either.

Currently, by happy chance, going up one ../ too far gets ignored by most browsers, so if you are not running Drupal in a subdirectory, the requests for both styles of page will just work. This means that 80% of cases should get by OK. The rest may need an output filter of some sort, developed some day.
Long ago, I started building this with reference to the existing import/export module, but I couldn't find many common features. The transitional format the XSL templates convert into is a 'microformat' of XHTML (basically XHTML, but with strictly controlled classes and IDs). This is how I think a platform-agnostic dump of content should be exported, when this eventually morphs into import_export_HTML.
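As a purely illustrative sketch (the exact classes and IDs the module looks for are not reproduced here; the names below are assumptions), such a transitional document might look like:

<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
    <title>My page</title>
  </head>
  <body>
    <div id="content">
      <h1>My page</h1>
      <p>The body text carried over from the legacy page.</p>
    </div>
  </body>
</html>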