WebCGM Conformance Test Suite - Methodology & Contents Plan

Revision: 1.2

Date: March 20, 2001

By: Lofton Henderson, for NIST, in partnership with CGM Open

Document Versions

Rev Date Description of Change

0.9 2001-01-24 Initial outline.
1.0 2001-02-20 First CGMO-NIST review version.
1.1 2001-03-05 Incorporate CGMO & NIST revisions; add TOC and numbered sections (xslt post-process).
1.2 2001-03-20 Public release version.

Foreword

This report describes the proposed methodology, as well as the proposed content, for a WebCGM Conformance Test Suite.

Although not all issues regarding ultimate test suite content are completely resolved, the specifications in this report are sufficiently complete that production can begin on the WebCGM test suite content.

Table of Contents

1. Introduction
    1.1 Goals & Purpose
    1.2 Scope of WebCGM Conformance Specifications
    1.3 Scope of Project
    1.4 Characteristics of the Suite
    1.5 Constraints
    1.6 Roadmap to this Document
2. Methodology Review
    2.1 Overview
    2.2 Progressive Testing
    2.3 Process of Building Test Suite Content
    2.4 Principles Applicable to Test Suite Content
3. Inventory of Existing Materials
    3.1 Motivation
    3.2 Static Conformance Materials
        3.2.1 ATA 2.4 GRex Certification Suite
        3.2.2 MetaCheck Test Files
        3.2.3 Vendor Test Files
    3.3 Dynamic & Interactive Test Materials
        3.3.1 Overview
        3.3.2 WebCGM Tutorial Demo
        3.3.3 Other
    3.4 Test Harnesses
        3.4.1 Overview
        3.4.2 NIST ATA Harness
        3.4.3 SVG Harness Design
        3.4.4 Other
4. WebCGM Conformance Suite Plan
    4.1 Plan Overview
    4.2 Methodology
    4.3 Modularization & Prioritization
    4.4 Static Module
        4.4.1 Development
        4.4.2 Static Rendering Components
    4.5 Dynamic Module
        4.5.1 Development
        4.5.2 Dynamic Module Components
    4.6 Packaging, Organization, and Presentation
        4.6.1 Standalone versus Browser
        4.6.2 Test Harnesses
        4.6.3 Test Naming Convention
        4.6.4 CGM Template
        4.6.5 Linking Order
    4.7 Operator Script Contents
    4.8 Generating the Reference Images
        4.8.1 PNG for Static-graphics Test Cases
        4.8.2 Reference Images for Dynamic Test Cases
    4.9 Repository
5. WebCGM Conformance Suite Specifications
    5.1 Static Module
        5.1.1 ATA-WebCGM Differences
        5.1.2 ATA Test Case Inventory & Disposition
        5.1.3 Prescription
            5.1.3.1 Definition of Dispositions
            5.1.3.2 Remove
            5.1.3.3 Keep
            5.1.3.4 Modifications
            5.1.3.5 Additions
    5.2 Dynamic Module
        5.2.1 Overview
        5.2.2 TR Extraction
        5.2.3 TP Synthesis
        5.2.4 Categorical TP Prioritization
        5.2.5 What to Test
        5.2.6 How to Test It
    5.3 Harness Details
6. Individual TC Descriptions
    6.1 Overview
    6.2 Format for TC Descriptions
7. Tools
8. Glossary
    8.1 Basic Effectivity Test (BE)
    8.2 Demo Test (DM)
    8.3 Detailed Test (DT)
    8.4 Drill-down Test
    8.5 Error Test (ER)
    8.6 Intelligent Graphics (IG)
    8.7 Semantic Requirement (SR)
    8.8 Test Assertion (TA)
    8.9 Test Harness
    8.10 Test Requirement (TR)
    8.11 Test Purpose (TP)
    8.12 Test Case (TC)
    8.13 Traceability
9. Bibliography

1. Introduction

1.1 Goals & Purpose

WebCGM 1.0 has been a W3C Recommendation for two years. Currently, there is no publicly available test suite that is specific to WebCGM and comprehensive enough at least to touch upon the major functional components of WebCGM.

The principal goal of this project is to produce an initial, usable test suite which covers all of the major functionality of WebCGM. It will be as comprehensive and detailed as resources permit. Comprehensiveness at a basic level of detail - i.e., full coverage of WebCGM functionality - has priority over detailed examination of any particular functional area.

The principal intended purposes of the suite are:

It is not a goal of this project to build a certification suite or establish a certification service.

1.2 Scope of WebCGM Conformance Specifications

WebCGM 1.0 ([WEBCGM10]) consists of static and dynamic functionalities. Static functionalities include the precise definition of a set of graphical drawing elements, for drawing static images. These are a subset of those described in the Model Profile of the ISO CGM standard ([CGM1999]), with a couple of additions (such as Symbol Libraries).

The dynamic functionalities define a set of Intelligent Graphics (IG) functionalities. They are specified by a set of Application Structures (APS) and APS attributes, and include:

WebCGM 1.0 hyperlink navigation functionality is defined in the context of HTML 4.0 documents. Standardized WebCGM behaviors assume an environment of current HTML-based Web browsers. Full functional conformance of a WebCGM viewer can only be measured in conjunction with a Web browser - in other words, only browser plugins (or their functional equivalent) can be fully tested for WebCGM 1.0 conformance.

Caveat. Testing WebCGM viewer conformance in a closely integrated browser environment introduces some inherent risk. When an actual result deviates from an expected result, it may be unclear whether the fault lies with the WebCGM viewer or with the browser.

1.3 Scope of Project

Conformance test suites for WebCGM could address:

  1. conformance of WebCGM instances;
  2. conformance of WebCGM generators;
  3. conformance of WebCGM interpreters and viewers;

This project's scope is limited to the third - a conformance test suite for WebCGM interpreters and viewers. Taken together with the previous section, this defines the scope and target of this project: conformance of fully-functional WebCGM browser plugins.

1.4 Characteristics of the Suite

There are several kinds of test suites that one could imagine for WebCGM:

  1. A WebCGM demo suite;
  2. A QA test suite for product developers;
  3. A publicly available conformance and interoperability aid;
  4. A certification test suite for a rigorous certification service.

#3 most closely matches our principal goals.

A few real-world demo files are anticipated (#1). We don't plan any "goodness" testing. Presently, there are no plans for a WebCGM certification service, therefore there is no need for #4. (However, this work could provide a foundation, should such plans emerge.)

While the formality and rigor of a certification suite might not be needed, the WebCGM conformance suite should embody "traceability" - what normative statement or statements in the standard justify a given test?

1.5 Constraints

Previous analyses (see [HEND01]) suggest that a reasonably comprehensive conformance suite for a standard such as WebCGM is an effort of at least 1.5 engineering-years magnitude, and can easily be twice this much or more. This would be impossible to build "from scratch", given the resources available to this project. Therefore we will leverage existing test suites' content, techniques, and designs whenever possible. Changes will be kept to a minimum, prioritizing generation of needed new materials instead.

Throughout this document, we will nevertheless indicate design choices for an ideal (unconstrained) world. But we will craft the actual solution to maximize our principal goals within resource constraints.

1.6 Roadmap to this Document

2. Methodology Review

2.1 Overview

There is a significant body of experience in methodology for test suites for graphical standards: CALS (CGM), VRML, ATA (CGM), and SVG. The following sub-sections summarize the parts that interest us for WebCGM.

2.2 Progressive Testing

The principle that test suites should be progressive in their organization and structure has been established and accepted for some time. Progressive means that the organization and application of the tests progresses from most general and basic, to more detailed and thorough.

The SVG conformance test work [HEND01] coined the term "Basic Effectivity" (BE) for the general, basic level of testing. It is a breadth-first testing of the significant functional components across the breadth of the standard. Experience has shown that the largest benefit to implementations, for the effort expended, derives from the BE tests.

BE tests are followed by detailed, drill-down (DT) tests, which methodically probe all testable assertions of the specification. Full confidence in an implementation, and product certification, require a comprehensive body of DT tests. DT tests are typically much more numerous than BE tests. Both CGM (ATA) and SVG conformance test work have acknowledged the value of some "demo" (DM) tests, which provide more natural combinations of functionality from the applicable standard.

These generic test categories are applicable to the WebCGM suite, and are equally applicable to static rendering and dynamic functionalities:

  • Basic Effectivity (BE) - verify rudimentary capability across all functional areas;
  • Detailed (DT) - comprehensively probe all test requirements (TR);
  • Demo (DM) - a few "real world" WebCGM cases, from WebCGM generator products, ideally complex and not hand-crafted.

A full set of DT tests is almost certainly beyond the resource constraints of the project. A full set of BE tests may be achievable.

One additional category of tests was prescribed for the SVG test suite development:

  • Error (ER) - test interpreter/viewer reaction to identified error conditions in the WebCGM spec;

Because WebCGM does not specify error response for viewers and interpreters, this category is inapplicable to a WebCGM conformance suite.

2.3 Process of Building Test Suite Content

Again, there is a significant body of experience in both the theory and practice of building test suite content.

The following basic process is applied for construction of both graphical and non-graphical test suites - CGM, VRML, XML, DOM, SVG, etc. In overview:

  1. Analyze the specification for testable assertions. These are called "Test Requirements", TRs. (Note: other literature refers to these variously as Semantic Requirements, SRs, or Test Assertions, TAs).
  2. Write a "Test Purpose" (TP) for each test. A Test Purpose is simply: what we want to test - the conditions, requirements, or capabilities which are to be addressed by a particular test instance.
  3. Write and document a Test Case (TC) which realizes each Test Purpose.

An important benefit of following this process is traceability - a test case and its test purpose can be traced back to the supporting requirements in the standard. Ideally, traceability is actually realized in the test case presentation, with buttons or links that connect the test to its supporting requirements from the spec.

While this is the only way to ensure comprehensive coverage and defensibility of a test suite, it is also time consuming and expensive. As we intend to adapt existing materials from elsewhere for this initial WebCGM test suite, and get the most coverage from available resources, we will take short cuts.

Section 4.2 of reference [NIST-VRML] contains a discussion of this methodology - TRs (which it calls SRs) and TCs (the step of generating TPs is implicit in this reference, not explicitly treated as a formal step).
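As a concrete illustration of the TR/TP/TC chain and the traceability it enables, the records might be modeled as in the following sketch. This is purely hypothetical: the class names, fields, and ID formats are illustrative, not part of any existing tooling described in this plan.

```python
from dataclasses import dataclass

@dataclass
class TestRequirement:
    # A testable assertion extracted from the spec, with a pointer back to it.
    tr_id: str
    spec_section: str
    assertion: str

@dataclass
class TestPurpose:
    # What we want to test; may address one or more TRs.
    tp_id: str
    description: str
    tr_ids: list

@dataclass
class TestCase:
    # A concrete test instance realizing one Test Purpose.
    tc_id: str
    tp_id: str

def trace(tc, purposes, requirements):
    """Follow a test case back to the spec sections that justify it."""
    tp = purposes[tc.tp_id]
    return [requirements[tr_id].spec_section for tr_id in tp.tr_ids]
```

With records like these, the "traceback" links or buttons mentioned above reduce to a simple lookup from TC to TP to the supporting TRs.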

2.4 Principles Applicable to Test Suite Content

Some basic principles have been learned during the construction of previous test suites (ATA CGM, CALS CGM, and SVG), applicable to both graphics suites and others:

  1. Simple or Atomic. Each test purpose should be as simple as possible, narrowly focused on an atomic (single) functionality. Example: choose one attribute or property and exercise it through a range, while holding other variables constant. The advantages to this approach are:
    • It is easy to see the cause, when an implementation fails (because only one variable is being manipulated).
    • The suite can span all of the functionality of the specification.
    • The size of the suite will be manageable - it will combinatorially grow out of control otherwise.
  2. Reducing number of tests. Without sacrificing the principle of atomic testing, the number of test instances (files) can be reduced by having a single instance combine multiple related test purposes. Example: for an attribute or property with a half-dozen different enumerated values, test each of the values in a "sub-test" of a single test case (test file instance). Counter-example (poor practice): the first CGM test suite sometimes had each instance test one value of an attribute, so that almost 30 test file instances were needed to test the horizontal and vertical text alignment values.
  3. Progressive. For any functionality, the tests should be organized from easy and general to harder and more specific. This avoids wasting time and resources if the implementation is completely incapable in a functional area.
    • the simplest tests are the Basic Effectivity tests, BE;
    • the harder are the Detailed tests, DT (also called drill-down).
  4. Comprehensive. The detailed tests should try to methodically vary and test all values, plus boundary conditions and extreme conditions, of each parameter, attribute, or property.
  5. Self-documenting. The tests should be self-documenting. For example, a line-width test should have something like tick-marks drawn to delimit the correct width. Graphical (displayed) text should explain and/or label the pieces of the picture.
  6. Key combinations. #1 notwithstanding, there should be some number of tests which do vary more than one attribute or property at once. Especially, thought should be given to how implementations might fail. Example: CGM has separate but equivalent attributes for lines and edges of filled primitives. Since it is common for implementations to use a single stroke generator for both purposes, it is sensible to test that the state of the line/edge attributes is properly saved and restored after drawing an edge/line.
  7. Real examples. The CGM V3 suite included some real world graphics arts and technical pictures.
    • these are called Demo tests, DM.
  8. Traceability. A test must be traceable back to a statement or statements in the standard's specification.

3. Inventory of Existing Materials

3.1 Motivation

Research ([HEND01]) shows that a comprehensive suite of a reasonably complex standard is an undertaking of at least 1.5 person-years effort. The resources of this project are a fraction of that. Therefore it is critical that we maximize use of existing materials: leverage what we can, and build what we must.

3.2 Static Conformance Materials

3.2.1 ATA 2.4 GRex Certification Suite

In terms of static graphics functionality, WebCGM and the ATA GRexchange profile have significant overlap. WebCGM 1.0 started with an early revision of the ATA profile, cut out unneeded features and elements, added a few new functionalities, and adjusted a few.

An ATA test suite was written (and is being used for certification service) for ATA revision 2.4. There are 258 test cases in the suite. These probe most (but not all) of the graphical functionality of the ATA GRex 2.4 specification. NIST added an interactive harness, which is based on an XML database description of the test cases.

A document has been written by Cruikshank, et al, at Boeing [CRUIK99], which compares WebCGM 1.0 and ATA GRexchange 2.5.

3.2.2 MetaCheck Test Files

In the process of developing the MetaCheck conformance test tool, CGM Technology Software (CTS) wrote several score simple test files to probe the non-graphics functionality of WebCGM - the APS and APS Attribute based intelligent content.

3.2.3 Vendor Test Files

The builders and providers of WebCGM viewers likely have significant collections of test files. In most cases, these likely would not align in a clear way with crucial testing principles, but they could be considered for the "demo" (DM) files of the test suite.

3.3 Dynamic & Interactive Test Materials

3.3.1 Overview

Compared to static graphics test materials, there is relatively little suitable existing material for testing the dynamic functionality of WebCGM.

3.3.2 WebCGM Tutorial Demo

Dieter Weidenbrueck of ITEDO GmbH put together a simple tutorial/demo file for WebCGM. This uses an HTML frameset to test a number of variations on the WebCGM fragment parameters, and to perform basic tests of HTML-to-CGM, CGM-to-CGM, CGM-to-GIF, and CGM-to-HTML navigation.

There are 16 sub-tests, each of which is launched from the same basic HTML page (frameset). Each test has an associated link to a text file, which contains Clear Text CGM code for the essential (V4 APS and APS Attribute) parts of the test. Integrated into a different test harness, these sub-tests are probably suitable as a subset of the needed Basic Effectivity (BE) tests for the dynamic functionality.

A Metacheck-validated version of this demo is publicly available ([WEBCGM-DEMO]).

3.3.3 Other

We are not aware of any other organized body of tests for the dynamic functionality. However, these areas could be productive:

3.4 Test Harnesses

3.4.1 Overview

All of the graphical test suites have certain components in common - test file instances, reference images, operator scripts which instruct how to run the test and define the pass/fail criteria. The test suites for interactive environments have added interactive tools to facilitate presentation of these materials in useful configurations - typically using a Web browser - and enable easy navigation through the test cases in the suite. These tools are "test harnesses".

3.4.2 NIST ATA Harness

NIST added a test harness to the existing ATA GRex 2.4 test materials, which comprised 258 .CGM instances and .GIF reference images, and approximately 25 HTML files which group the operator scripts for the 258 test cases. The harness is based on a comprehensive XML database in a single file, which has (TestCase) elements for each of the 258 test cases, as well as elements to define the test purpose and provide the operator script.

The viewing and navigation pages of the NIST harness are dynamically generated by JavaScript code which interacts with the XML database through DOM methods. There are two (HTML) frames. The left frame contains 3 selector forms (HTML FORMs) with pulldown lists, for selection by Version (1, 2, 3), CGM element/test category, or specific test case name. Use of these forms leads to presentation in the right frame of the test purpose, operator script, and link to view the GIF reference image, for the subset of test cases matching the selections in the left frame.

[Figure: PNG image of NIST harness page]

There is no facility to invoke a CGM viewer. The user is expected to cause CGM viewing to happen somehow, in parallel with using the harness to view the Reference Image and the operator script. In the preceding screenshot, the "View Reference Picture" button has been pushed, which brings up a separate browser window with the GIF image (shown partially overlaid on the principal harness window).

3.4.3 SVG Harness Design

The SVG test suite harnesses are also based on an XML database, which provides operator scripts and navigation links for each test case. Unlike the NIST harness, there is one XML instance per test case, defining navigation paths through the suite, and containing the operator script for the test case. Again unlike the NIST harness, these XML instances are precompiled by XSLT into the SVG navigation harnesses, a set of linked viewing pages.

In fact, there are currently 4 different harnesses available, to accommodate different SVG viewer configurations. These 4 harness sets are each precompiled from the same XML instances using 4 different XSLT stylesheets:

  1. HTML pages of PNG+OS presented along with HTML navigation links (but no SVG viewing);
  2. a set of all-SVG pages, which only present the SVG rendering and use SVG links for navigation;
  3. linked HTML framesets of PNG+SVG side-by-side with OS below and HTML navigation links;
  4. all-SVG pages that use SVG links for navigation, and use SVG facilities to present PNG reference image and SVG rendering side-by-side.

These options accommodate different viewer types - plugins and standalones - as well as different viewer capabilities. #3 is illustrated below:

[Figure: SVG harness page]
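The SVG-style precompilation step might look like the following minimal sketch, assuming a hypothetical per-test-case XML vocabulary. (The real SVG suite's element names differ, and XSLT - not Python - is the tool actually used; this just illustrates the XML-to-static-HTML idea.)

```python
import xml.etree.ElementTree as ET

# Hypothetical per-test-case XML instance (illustrative element names).
SAMPLE = """<testcase name="polyln01">
  <purpose>Verify basic POLYLINE rendering.</purpose>
  <script>Compare the rendering with the reference image.</script>
  <next>polyln02</next>
</testcase>"""

def compile_page(xml_text):
    """Precompile one XML test description into a static HTML harness page,
    with the operator script inline and a navigation link to the next test."""
    tc = ET.fromstring(xml_text)
    name = tc.get("name")
    return (f"<html><body><h1>{name}</h1>"
            f"<p>{tc.findtext('purpose')}</p>"
            f"<p>{tc.findtext('script')}</p>"
            f"<a href='{tc.findtext('next')}.html'>Next</a>"
            f"</body></html>")
```

Running this over every XML instance, once per stylesheet variant, yields the linked sets of viewing pages described above.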

3.4.4 Other

Neither of these harnesses has traceability (spec and test requirement traceback and display) features built in. Such features can be seen in the NIST DOM test suite, for example.

4. WebCGM Conformance Suite Plan

4.1 Plan Overview

The following sub-sections describe how we intend to produce the test suite. The next major section, WebCGM Conformance Suite Specifications, presents those detailed content specifications which have been generated by these plans.

4.2 Methodology

An optimal methodology was outlined earlier, which has been developed through the construction of a number of previous test suites.

While such rigor is necessary to ensure comprehensive coverage and defensibility of a test suite, it is also time consuming and expensive. As we intend to adapt existing materials from elsewhere for this initial WebCGM test suite, and to get the most coverage from available resources, we will take short cuts:

The existing ATA CGM test materials don't have associated TR materials, nor do they have any sort of traceability or traceback features. These will not be developed retroactively. Rather, the test cases themselves will be subsetted, modified as appropriate to the global differences between ATA and WebCGM, modified for some specific functional differences, and new test cases will be developed where there are serious omissions.

For the new materials that must be developed - mostly for the APS-related WebCGM behavior and navigation functionalities - the optimal methodology will be followed. In particular, traceability information will be generated (whether or not there will be resources to fully incorporate it in the first WebCGM test suite release).

4.3 Modularization & Prioritization

The WebCGM specification divides naturally into semi-independent modules: static & dynamic.

These can be worked on independently and in parallel.

The ideal prioritization is to make an entire breadth-first, basic effectivity (BE) test suite release first, of both the static and dynamic functionality. As remaining resources permit, or in a follow-on project, a drill-down (DT) release should be built.

Notes about BE/DT prioritization:

4.4 Static Module

4.4.1 Development

As indicated previously, we will leverage the existing ATA GRex 2.4 test suite with minimal changes. The changes to the test materials themselves will include:

  1. removal of some tests, functionality which is in ATA but not in WebCGM;
  2. global modification of all tests, according to different requirements for ProfileId, different information in the in-picture legend, etc.
  3. modification of specific functional pieces of some tests, e.g., conversion of 'text' elements to 'restricted text'.
  4. addition of new tests to cover any significant omissions in tests, whether due to:

The precise specifications for each of these 4 classes of changes will be done as follows. Cruikshank et al [CRUIK99] have generated a study of the relationship of the ATA 2.5 profile to WebCGM 1.0, as well as the Model Profile and a draft of CALS Rev B. The differences between ATA 2.5 (of the study) and 2.4 are relatively few and are documented.

This study will be used to derive a differences document, between GRex 2.4 and WebCGM 1.0. This differences document will be applied to the 258 test descriptions of the GRex 2.4 test suite. A list of specific dispositions of each of the GRex 2.4 tests will be generated. The changes will be implemented to produce the static graphics rendering module for the WebCGM suite.

Notes about production methods:

Note about what to test (ref. #4 above):

Issues with this plan:

4.4.2 Static Rendering Components

Each Test Case in the ATA GRex 2.4 test suite contains three principal components:

  1. .cgm file - the WebCGM instance;
  2. .gif file - a raster reference image showing a correct rendering of the WebCGM instance;
  3. the operator script (OS), which comprises a few terse sentences prescribing the actions the (testing) operator should take, and what the results should be (i.e., verdict criteria for pass/fail).

The WebCGM static rendering module will contain the same three components. However, ideally these changes should be made:

GIF input and PNG output are within the functional scope of available raster tools. A tool which could perform the conversions in batch mode would make this task trivial.
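The batch conversion could be driven by a small script such as the following sketch. It only builds the command lines; it assumes an external GIF-to-PNG converter is available (ImageMagick's convert is used here as an example, not a prescribed tool).

```python
import pathlib

def conversion_commands(directory, tool="convert"):
    """Build one GIF-to-PNG command line per .gif file in the directory.
    'convert' (ImageMagick) is an assumed example; any batch-capable
    raster converter would serve."""
    cmds = []
    for gif in sorted(pathlib.Path(directory).glob("*.gif")):
        png = gif.with_suffix(".png")  # same base name, .png extension
        cmds.append([tool, str(gif), str(png)])
    return cmds
```

Each returned command could then be passed to subprocess.run, leaving the 258 reference images converted in one unattended pass.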

The OSs of the ATA suite were written appropriately for a certification suite, which would be applied by possibly non-expert operators, and in which the pass/fail rating is paramount. The operator scripts of the SVG conformance suite are more useful and informational - a few sentences which state the test purpose, describe the visible components and sub-tests of the rendered test case, what the results should be, and allowable deviations of the test case rendering from the reference image. The goals and purposes of the WebCGM suite are more similar to the SVG suite, and the OS style should be changed. Effecting such a change would be time consuming.

Note about OS packaging.

For expedience, the initial WebCGM suite will retain the style (and content) of the NIST-ATA XML database. At least one test harness will be included.

Other supporting material should be generated for each test case:

Issues about the static rendering components:

Note about traceability links. One of the difficult issues is being able to link into the WebCGM spec at a fine enough level of detail. For example, paragraphs, sentences, and bullet items may need to be the link targets. The spec does not have such anchors now. However, it would be easy to produce an XSLT stylesheet that would a-tag (anchor) all block-level elements, list items, etc., in the WebCGM spec. This would enable the traceability linking to be done. There is still a resource issue with doing it retroactively in the static rendering tests.

The XSLT Conformance TC of OASIS has devised a way to do this, starting with the XML version of the XSLT standard. See [XSLT-SPECLINK].
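The anchor-tagging idea can be illustrated with a small sketch - here in Python rather than XSLT, assuming well-formed XHTML input; the element list and id format are illustrative only:

```python
import xml.etree.ElementTree as ET

def add_anchors(xhtml_text, tags=("p", "li")):
    """Give every block-level element (paragraphs, list items) a sequential
    id, so traceability links can target individual spec statements."""
    root = ET.fromstring(xhtml_text)
    n = 0
    for el in root.iter():
        if el.tag in tags and "id" not in el.attrib:
            n += 1
            el.set("id", f"anchor-{n:04d}")
    return ET.tostring(root, encoding="unicode")
```

A test case's traceback link would then point at a stable fragment such as #anchor-0042 in the anchored copy of the spec.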

4.5 Dynamic Module

4.5.1 Development

The dynamic module comprises tests of the Intelligent Graphics (IG) functionality of WebCGM: structuring, linking and navigation, layering, and (optional) search. The IG functionality is defined by APS and APS attributes specifications in WebCGM. The dynamic material must be built from scratch.

Although we don't intend to be overly formal, a TR analysis of the intelligent content portions of the WebCGM spec document will be done. We see no other way to get a handle on the scope and upper bound of what can be tested. From the list of TRs, we will derive a collection of test purposes (TP). Those will be prioritized and gathered into a collection of test case definitions.

The WebCGM spec provides a natural way to group the IG functionality for TR/TP processing:

Our priority order for dynamic test development will be:

  1. objects, their behaviors, and navigation (including full fragment syntax support) are the central functionality of WebCGM and the top test priority;
  2. 'para' and 'subpara', as special cases of objects, should be prioritized equally with 'grobject'...TBD... objects, their behaviors, and navigation.
  3. 'layer' support is required by WebCGM, but layer functionality is peripheral to the main purposes of WebCGM.
  4. 'para' and 'subpara' as searchable entities should be lowest priority - search functionality is optional in WebCGM viewers, but the spec defines how it is to be done, if at all.

Note. Although we will make a more or less comprehensive set of TRs and TPs for the dynamic (IG) functionality, we won't likely have the resources in this project to build it all. A complete BE level, plus some prioritized drill-down (DT) tests should be our goal.

4.5.2 Dynamic Module Components

The dynamic module will involve the same components as the static, plus some. There will be:

  1. a WebCGM test case file instance;
  2. some sort of reference image;
  3. some operator script material.

The reference image is not necessarily a raster image of the WebCGM file instance, in this case. For example, if the test case involves CGM-to-HTML navigation, the reference image might be the HTML target. An ideal solution for the dynamic tests might be a linked set of two or more reference images, indicating initial view, next, ... last. It is to be determined whether or not we have the resources to pursue this (unlikely).

The dynamic materials will also involve some HTML pages and framesets for some test cases.

It would be ideal if the dynamic test cases could all originate out of the same sort of links in the test harness, as with the static tests. Whether or not this can be achieved will be determined after more detailed design of some of the test cases themselves. The same applies to other details of the components comprising the dynamic portion of the test suite - these will be refined as the dynamic functionalities' tests are developed.

4.6 Packaging, Organization, and Presentation

4.6.1 Standalone versus Browser

As indicated in earlier discussion of WebCGM conformance scope, WebCGM viewers can only be fully-conforming to the WebCGM 1.0 specification if they are browser plugins or controls. There is no conformance subset which excludes the dynamic capabilities, or considers conformance in a non-interactive environment.

Note. For comparison, SVG defines conforming static and conforming dynamic viewers, and further qualifies dynamic functionality requirements with such conditionals as, "...in environments which are interaction-capable". (It is not clear that such hedging would be useful for WebCGM, which is specifically targeted at interactive network documents.)

Nevertheless, we should not arbitrarily require an interactive browser environment and plugin for testing functionality that is not dynamic, i.e., the static graphics tests. The ATA test suite materials are packaged as a set of test .CGM files, .GIF files, and Operator Script files. The ATA-NIST interactive harness incorporates the OS contents directly into the base .XML file which drives the harness. As we do not propose to modify the OS contents initially (but should do so eventually, when more resources are available), there is no cost now to keeping the 3 static filesets for the WebCGM subset of the ATA test cases.

So for the initial release, at least:

  1. the suite shall be conveniently navigable and executable from a browser;
  2. but, the static-graphics tests will be usable for a standalone viewer.

4.6.2 Test Harnesses

At least one interactive test harness will be provided, to facilitate presentation of the various test material components, as well as to enable easy navigation around the test suite.

The harness of the NIST-ATA suite has some attractive attributes:

  1. it is already written(!), and works;
  2. it is based on an XML database, which gives flexibility for future growth;
  3. its real-time JavaScript interpretation of the XML database is attractive from a maintenance perspective.

It also has some limitations:

Several alternative harnesses are available - e.g., SVG, CSS, DOM - and some of these resolve some of the limitations. However, attribute #1 is compelling - the NIST-ATA harness could be used immediately, with no modification, for static-rendering viewing of WebCGM test cases (which are in the ATA subset). Therefore we will use the NIST-ATA approach.

There are some modifications that should have high priority:

The details of these are yet to be determined - this will be done in the next project phase.

There are additional desirable modifications, but with lower priority:

Harness issues to resolve:

4.6.3 Test Naming Convention

The names of the existing tests in the ATA GRex 2.4 test suite are constrained to "8.3" filename limits - a filename of at most 8 characters, followed by a 3-character extension - and follow a simple naming convention. The base filename is AAAAAANN, where "A..A" is a 6-character meaningful mnemonic or acronym (e.g., POLYLN, COLTBL, etc.) and NN is an ordinal: 01, 02, 03, ...

We will keep this for the static graphics tests because: it exists, is adequate, and spending effort to improve it is not justified.

We will consider a more meaningful naming convention for new dynamic tests. Experience with the SVG test suite showed that a strong naming convention was useful, both for the management of the test suite repository and to facilitate auto-generation of harnesses.

(For reference, SVG's name convention is: chapter-focus-type-num. 'Chapter' is the SVG spec chapter name, and 'focus' is a functional focus within the chapter. 'Type-num' is a concatenation of the test type - BE, DT, ER, or DM - and its ordinal in the sequence of such tests - 01, 02, ... Examples: path-lines-BE-01, shapes-rect-DT-04, styling-fontProp-DM-02.)
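As a sketch of how such a convention aids automation (e.g., auto-generation of harness indexes), the following checks a test-case name against the two conventions described above. The patterns are inferred from the examples in the text (POLYLN01, path-lines-BE-01), not from a formal grammar, so they are assumptions for illustration.

```python
import re

# Assumed patterns, reconstructed from the examples in the text.
ATA_NAME = re.compile(r"^[A-Z]{6}\d{2}$")                       # AAAAAANN, e.g. POLYLN01
SVG_NAME = re.compile(r"^[a-zA-Z]+-[a-zA-Z]+-(BE|DT|ER|DM)-\d{2}$")  # chapter-focus-type-num

def classify(name):
    """Return which naming convention a test-case name follows, if any."""
    if ATA_NAME.match(name):
        return "ata-8.3"
    if SVG_NAME.match(name):
        return "svg-style"
    return "unknown"
```

A harness generator could use such a classifier to sort incoming test files into the static (ATA-derived) and dynamic (new) groups automatically.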

Test naming issue to be resolved:

4.6.4 CGM Template

One of the principles of test suite construction is "atomicity", i.e., each test should consist of a limited number of sub-tests, each of which tests a simple assertion in isolation. A corollary of this is that the tests should be put into a uniform template. In the case of CGM, this template should give uniform definition to at least these aspects of all test cases:

There can be exceptions for test cases which specifically manipulate and test functionality affecting the standardized template aspects, e.g., addressability tests, font tests, etc.

In the ATA test case CGMs, there are two styles of Legend block in use. The simplest, found in the older tests, features:

Here is a sample Legend block (new style) from the ATA GRex tests:

Sample Legend Block

Missing from the older test cases but present in the newer is:

Missing from both older and newer:

The latter has proved useful (in SVG test suite construction) in judging accuracy of placement of test picture details.

The serial number is a method for ensuring that the raster reference image, the CGM instance, and the CGM rendering are all current and all agree. To be useful, it must unfailingly increment whenever any change is made to the CGM file - in fact, whenever the CGM file is saved. A way to automate this is needed. (Note. Such was the case with a former HSI tool called MetaWiz, which defined test cases in a meta-CGM language - CGM with scripting extensions - and generated CGM instances on demand.)
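One hedged sketch of such automation, in the absence of a tool like MetaWiz: track a checksum of each CGM file in a small registry and bump the serial whenever the content changes. The file and registry names here are illustrative assumptions, not part of the suite.

```python
import hashlib
import json
import os

def update_serial(cgm_path, registry_path="serials.json"):
    """Return the current serial number for cgm_path, incrementing it
    whenever the file's content has changed since the last call.
    (Illustrative sketch; registry layout is an assumption.)"""
    registry = {}
    if os.path.exists(registry_path):
        with open(registry_path) as f:
            registry = json.load(f)
    with open(cgm_path, "rb") as f:
        digest = hashlib.sha1(f.read()).hexdigest()
    entry = registry.get(cgm_path, {"serial": 0, "digest": None})
    if entry["digest"] != digest:  # file changed since last recorded save
        entry = {"serial": entry["serial"] + 1, "digest": digest}
        registry[cgm_path] = entry
        with open(registry_path, "w") as f:
            json.dump(registry, f)
    return entry["serial"]
```

Run as part of the save/export step, this guarantees the increment-on-change property; the serial would still have to be written back into the CGM Legend by the production tooling.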

A serial number is important for test suite maintenance and integrity. Its lack in the older (majority) of the tests should be corrected.

The template is not as regular across the existing test cases as it might be. However, with the exception of the missing serial number, it is adequate, and resources should probably not be spent to further regularize it. Note, however, that the Legend contents - specifically the profile id and ed, test suite name, release level, and date - must be changed for all test case instances. New test cases will be built to the newer style of template.

Template (priority) issues yet to address and resolve:

4.6.5 Linking Order

The overall linking structure is decided by the choice of the NIST-ATA harness. Effectively, you get an index of all tests in the pull-down lists of the (left) HTML forms frame. These result in the presentation of one or more TCs in the right frame - test purpose statement, operator script, and reference image.

By comparison, SVG adopted a linking structure in which a BE layer is linked throughout the suite (next/previous buttons), DTs drill down from BEs (child button), BEs pop up from DTs (parent button), etc. There is also a Table of Contents (by chapter), but no index - the latter would be a useful enhancement feature, and is equivalent to the NIST-ATA left frame.

There seems to be little purpose in trying to retrofit such a navigation map onto the NIST-ATA materials, especially as there is less underlying structure in the NIST-ATA collection. (There may be some BE/DT classification and structuring in the new dynamic tests.) However, navigation buttons for previous and next might be convenient (rather than having to go back to the left-frame pull-down menu and select).

Linking issues to be investigated and resolved:

4.7 Operator Script Contents

The Operator Script will comprise a few sentences, written (per the NIST-ATA test harness) as an element and sub-elements of the XML TestCase element for the test case. The existing NIST-ATA OS content is a terse set of steps and points of evaluation for a non-expert conformance testing operator. We propose to deviate towards the informative and narrative for new tests. OSs for the WebCGM test suite should contain any or all of:

  1. what is being tested, i.e., a summary of the test purpose(s);
  2. what the results should be (esp for dynamic tests);
  3. verdict criteria for pass/fail;
  4. allowable deviations from the reference image;
  5. how to execute test, if there are any special instructions (esp for dynamic tests);
  6. optionality of features;
  7. prerequisites (other functionality used in the test);

#2 could conceivably be: "CGM rendering should look like the reference image". However, some specifics could be pointed out, such as (for an accuracy test): "All lines should pass through the cross-hairs", or "Vertexes should be at the locations of the markers". For dynamic tests, #2 will be a narrative description of what should happen.

About #6, optionality. If a test is exploring an optional or recommended feature, that should be clearly indicated right at the beginning of the operator script. Only one WebCGM instance comes to mind: search priority, for viewers that support text search on the graphics.

#7 refers to a brief description of functionalities, other than the one under test, which are used in the test file instance.

Note. In addition to the other purposes of the Operator Script, a well-written Operator Script can provide an aid to accessibility - an important consideration for W3C standards.

4.8 Generating the Reference Images

4.8.1 PNG for Static-graphics Test Cases

This section will likely develop and evolve further. For now, some variation of this scheme (used initially in the SVG work) is likely to provide a successful fallback to more direct methods:

  1. View the WebCGM in a correct viewer, or
  2. View a "patch" file in a viewer.
  3. Screen capture (ALT-SHFT-PRINTSCREEN)
  4. Paste into Adobe ImageReady
  5. SaveAs PNG.

Note about #2. If there is no CGM viewer available which produces a correct image of the WebCGM instance, then write another instance which produces the desired picture. For example, if no viewer would render the Rectangle element correctly, then draw it as an equivalent Polygon. The latter is the "patch" file.

A better method would be a direct one: a WebCGM-to-PNG rasterizer, such as MetaRip. We will look into this further in the next project phase.

We also must address the conversion of existing GIF images to PNG (although a direct CGM rasterizer, with a batch mode, could remove the need for it). ImageReady could also be used to convert existing GIFs to PNG, but it is a manual process: Open GIF, SaveAs PNG. Raster format converters with command line interfaces (usable in batch) are available.

We have noticed that there is little relationship between the existing GIF image dimensions (e.g., 491x417) and the canonical WebCGM address space (1000x1000). This is not inherently bad; however, if there were a more direct relationship (1-to-1, or 1-to-2), then raster analytic tools such as the pointer-coordinate tool of ImageReady could be conveniently used to diagnose accuracy questions - the reference image would be more useful.
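To make the point concrete, this sketch maps a raster pointer coordinate back into the canonical 1000x1000 VDC space. With a 1-to-2 image (500x500) the mapping is a clean doubling; with 491x417 the factors are awkward fractions. The function and its defaults are illustrative, not part of any tool.

```python
def pixel_to_vdc(px, py, img_w, img_h, vdc=1000):
    """Map a raster pointer coordinate to the canonical VDC space.
    Raster y runs downward from the top; VDC y runs upward."""
    x = px * vdc / img_w
    y = vdc - py * vdc / img_h
    return (x, y)
```

With a 500x500 reference image, a pointer reading of (250, 250) is immediately recognizable as VDC (500, 500); with 491x417 the same check requires a calculator.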

Issues to address for reference images:

4.8.2 Reference Images for Dynamic Test Cases

The dynamic tests will usually have multiple reference images - at least two, an initial state and a final state. Ideally, the harnesses would incorporate features to deal with multiple images. It will probably suffice to have a single image (the final state, after a dynamic action such as link navigation or layer selection is triggered), if the Operator Script is sufficiently informative.

In addition, the initial state could be either CGM or HTML, and likewise the final state. CGM reference images can be captured as for static graphics - either direct CGM-to-PNG production, or screen capture and save. HTML states (reference images) could be done by raster screen capture, or perhaps more efficiently by saving the HTML page itself. There may be issues with reproducibility of the exact screen appearance in the latter.

These latter issues will be addressed and resolved in the next project phase.

Issues to address for dynamic reference images:

4.9 Repository

This is a maintenance issue. It need not be resolved now, but we will mark it and put a placeholder for future consideration. How is the WebCGM suite to be kept and maintained after initial release? For comparison, SVG uses CVS to keep all of the materials (and all working group members have r/w access to the repository.)

5. WebCGM Conformance Suite Specifications

5.1 Static Module

5.1.1 ATA-WebCGM Differences

The reference document [CRUIK99] was analyzed and edited so that only the differences between ATA 2.5 and WebCGM 1.0 remained. The differences between ATA 2.5 and ATA 2.4 were applied, resulting in:

Table of ATA 2.4 versus WebCGM 1.0 differences.

(Production note. [CRUIK99] was a MS Word .DOC file. It was edited down to only the differences, and exported as HTML. The large and verbose MS HTML was processed through the W3C 'Tidy' utility. The resulting small and clean HTML table had some undesirable formatting artifacts. An XSLT stylesheet was written to process the HTML table and rewrite it with these artifacts corrected.)

5.1.2 ATA Test Case Inventory & Disposition

An XSLT stylesheet was written to process the NIST-ATA XML database, and produce a table of Test Case name, Element(s) (an informational tag in the XML database), test purpose (ditto), and a final column for disposition (default value, "Keep"). The information from the differences table was applied, and any dispositions other than "Keep" (remove, modify, or qualified keep) were edited into the table. Result:

Table of ATA Test Cases and Dispositions.

Note. The anomalies in the NIST-ATA XML database can be seen in the table. The "Elements" classification is sometimes missing, and is inconsistent in content (detail). The "CGM Purpose" information is not uniformly on target. As stated earlier, these aspects of the XML database should be cleaned up (but at lower priority than the needed new tests, traceability improvements, etc.).
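For illustration, the tabulation step could equally be done with a short script instead of an XSLT stylesheet. This sketch assumes hypothetical element and attribute names (TestCase, Name, Elements, Purpose) for the NIST-ATA XML database - the real names would have to be taken from the actual file - and applies the default "Keep" disposition.

```python
import xml.etree.ElementTree as ET

def tabulate(xml_text):
    """Produce table rows (name, elements, purpose, disposition) from
    the test-case XML database. Tag names are assumptions."""
    rows = []
    for tc in ET.fromstring(xml_text).iter("TestCase"):
        rows.append({
            "name": tc.get("Name"),
            "elements": tc.findtext("Elements", default=""),  # sometimes missing in the database
            "purpose": tc.findtext("Purpose", default=""),
            "disposition": "Keep",                            # default; edited afterwards
        })
    return rows
```

The `default=""` handling mirrors the observed anomaly that the "Elements" classification is sometimes absent.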

5.1.3 Prescription

5.1.3.1 Definition of Dispositions.

The preceding table contains these dispositions for the ATA static rendering tests:

  1. Remove - do not include in the WebCGM suite in any form.
  2. Keep - include in the WebCGM suite, with only common global modifications (see below) applied.
  3. Keep as demo files - apply global modifications and segregate.
  4. Modify - apply global modifications, and they will also need some specific functional changes.

"Segregate" in item #3 indicates that the file would be grouped and linked with the "DM" files, if the suite were structured that way.

5.1.3.2 Remove.

35 of the tests relate to elements which are disallowed in WebCGM: segments, bundled attributes, some advanced geometry, etc.

Test Cases to Remove from WebCGM Suite
ASFTST02 ASFTST03 ASFTST04 ASFTST05 ASFTST06 ASFTST07
ASFTST08 ASFTST09 ASFTST10 TRANSP01 MFVERS01 FNTLST08
ALLELM02 CLIPNG06 SEGMNT01 SEGMNT03 SEGMNT04 SEGMNT05
SEGMNT07 SEGMNT08 SEGMNT09 SEGMNT11 SEGMNT12 SEGMNT13
HYPARC01 HYPARC02 PARARC01 PARARC02 PARARC03 PARARC04
NUBSPL01 SEGMNT02 SEGMNT06 SEGMNT10 PRTREG01

(Production note. This table was extracted from ATA-test-case table via a simple XSLT stylesheet.)
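The disposition could be applied in batch with a simple filter over the test-case names. The removal set below simply restates the table above; the partition function is an illustrative sketch.

```python
# The 35 test cases listed in the "Remove" table above.
REMOVE = {
    "ASFTST02", "ASFTST03", "ASFTST04", "ASFTST05", "ASFTST06", "ASFTST07",
    "ASFTST08", "ASFTST09", "ASFTST10", "TRANSP01", "MFVERS01", "FNTLST08",
    "ALLELM02", "CLIPNG06", "SEGMNT01", "SEGMNT03", "SEGMNT04", "SEGMNT05",
    "SEGMNT07", "SEGMNT08", "SEGMNT09", "SEGMNT11", "SEGMNT12", "SEGMNT13",
    "HYPARC01", "HYPARC02", "PARARC01", "PARARC02", "PARARC03", "PARARC04",
    "NUBSPL01", "SEGMNT02", "SEGMNT06", "SEGMNT10", "PRTREG01",
}

def partition(names):
    """Split an inventory of ATA test-case names into (keep, removed)."""
    keep = [n for n in names if n not in REMOVE]
    removed = [n for n in names if n in REMOVE]
    return keep, removed
```

Such a filter could drive the batch copy step that assembles the WebCGM fileset from the ATA originals.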

5.1.3.3 Keep.

The following common global modifications must be applied uniformly to the routine, template-related parts of the tests:

  1. Metafile Id: change from ATA style - 33 chars with test case name at end - to simply the TC-name string.
  2. Metafile Description: the substrings must be changed as follows,
  3. Legend
  4. Change any 'Text' elements to Restricted Text:
  5. Change any reference to OCRB (to Helvetica):

Notes about #4 & #5. We do not yet know the extent of these changes. In this section, we are dealing with uniform changes to routine template aspects. It is possible that there are none of these usages in the template parts of the test files. Usages in the substantive body of the tests will be dealt with in the "Modifications" section below. Second: a good candidate process is to convert the CGMs to Clear Text (in batch), and then 'grep' in batch for the usages.

Notes about #1, #2, and #3. We are still considering labor-saving ways to accomplish these global changes. One possibility is batch conversion to Clear Text, and then a combination of automatic (e.g., 'sed') and manual (text editor) procedures. A better possibility is recovery and restoration of the tool MetaWizard of the former HSI, which makes it possible to convert and maintain the test case instances in a "meta-CGM" script language that features macros, inclusion, and looping control structures. MetaWizard also has an automated serialization (serial number) feature.
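As a hedged sketch of the 'sed'-style pass, the following applies two of the uniform edits (#4 and #5) as textual substitutions on a Clear Text encoding. The keywords are assumptions about the encoding in use, and a keyword swap alone cannot supply the Restricted Text delimiting-box parameters, so this is illustrative only.

```python
import re

def apply_global_mods(cleartext):
    """Apply two of the uniform template edits to a Clear Text CGM
    fragment. Keywords are assumed; a real conversion of TEXT to
    RESTRTEXT must also insert the delimiting-box parameters."""
    # #4: change Text elements to Restricted Text (keyword only; \b avoids
    # corrupting existing RESTRTEXT occurrences)
    cleartext = re.sub(r"\bTEXT\b", "RESTRTEXT", cleartext)
    # #5: change references to OCRB to Helvetica
    cleartext = cleartext.replace("OCRB", "Helvetica")
    return cleartext
```

Run over the batch-converted Clear Text files, this kind of pass handles the mechanical part; the manual text-editor step would then complete the non-mechanical edits (Legend contents, Metafile Description substrings).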

Issues:

5.1.3.4 Modifications.

This is a preliminary list of modifications to existing ATA tests for WebCGM inclusion:

To do/resolve in next project phase:

5.1.3.5 Additions.

There are a number of candidates for addition to the static-graphics tests of the WebCGM, based on:

Here is a preliminary list of candidates for new tests. Initial recommendations are in brackets.

  1. Multi-picture test. [yes]
  2. Test with null Metafile 'id'. [yes]
  3. Test that clipping mode is 'shape'. [yes]
  4. Test additional bitonal tile compressions 0,1,5,6 (TBD: verify tests of "2" in ATA) [yes]
  5. Test additional tile compressions 0,1,5,9 (TBD: see if 2, 6, 7 are tested in ATA) [yes]
  6. ESCAPE elements in WebCGM which are not in ATA GRexchange.
  7. Symbol Libraries. [yes]
  8. Unicode UTF-8 and UTF-16 [yes-simple]
  9. Font Properties [no]
  10. colour models sRGB, RGBA, and sRGBA [no]

Note about #1. Although it is under discussion to deprecate multi-picture metafiles, a significant amount of WebCGM fragment syntax is devoted to multiple pictures. It should be tested.

Note about #7 and related escapes. This is a major extension functionality of WebCGM, which also exists in ATA. It should be tested, even though it is not yet in known use.

Note about #8. These tests should not probe exotic character sets, but rather probe simple, common sets in the given encodings. Initial tests could be:

Note about #10 and ESCAPE 45. Precise color models and transparency effects are very peripheral to the central purpose and constituency of WebCGM.

5.2 Dynamic Module

5.2.1 Overview

Because we are starting from scratch on the dynamic test cases - no existing test materials, per se - we will follow the ideal methodology for test suite construction:

We will present the first two of these in this report, and the other two will be part of the next project phase.

5.2.2 TR Extraction

TR extraction has been done for WebCGM 1.0 (1st Release, WebCGM-REC-20000121), resulting in the tables:

WebCGM TR Tabulation.

Notes about the tables:

  1. The table was produced by reading the WebCGM spec and extracting the TRs manually into a table built in an HTML editor. Upon further reflection, this approach was backwards. The TRs should have been put into an XML database, from which a simple XSLT stylesheet could produce the HTML table for display.
  2. The WebCGM spec references in the table are manually constructed, "pseudo-Xpointer" style.
  3. The "TR identifier" entries in this initial version of the table are simply unique IDs which can be used to reference the TRs, produced by the "generate-id()" function of XSLT.

Potential improvements to the table and methodology:

Issues to resolve about TR extraction:

5.2.3 TP Synthesis

The above-referenced TR Tabulation was analyzed to produce a superset of Test Purposes:

WebCGM TP Enumeration.

We call this a "superset", because some TPs are redundant, and some very general TPs are made redundant by more specific ones. These are indicated by comments in the table (for example see the first TR). Other cases of apparent redundancy (e.g., very general TPs) are not commented - these may resolve into a single BE test, for example, the general TR statement of permissible structural variations of the fragment specification.

There are some issues yet to resolve related to the TP synthesis:

  1. Link configurations (HTML-to-CGM, CGM-to-CGM, or CGM-to-HTML) for fragment syntax tests - should all applicable configurations be tested? As BE tests? As DT tests?
  2. TP-to-TR can be many to one, not just 1-to-1. This is because there can be a number of contexts in which to test each TR statement (this becomes an issue of "combinations" - how much/many to test.) Example: the TR for 'objid' object addressing. The TP enumeration table is a first cut. Some of the TPs should be resolved into multiple TPs (be more atomic). Others perhaps are appropriate to leave as "compound" TPs, especially for a BE-level test. This determination will be addressed in the next project phase (actual test case description and production.)
  3. Also in the table, we are applying "pragmatic" limits - essentially selecting and stating priority TPs - in cases where there could be many more TPs generated (e.g., in different contexts and for different configurations).
  4. In some cases, especially picture behaviors, it seems prudent to check the behaviors in each of the link configurations (HTML-to-CGM, CGM-to-CGM, and CGM-to-HTML), because different software components and/or interfaces may be involved.
  5. It is still an open issue, whether to include the link-configuration TP variations (HTML-to-CGM, CGM-to-CGM, and CGM-to-HTML) on all of the TRs where it might be applicable. Pro: different browser/viewer logic paths. Con: combinatorics.
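The combinatorics concern in the last two items can be made concrete: crossing each TR with the three link configurations triples the candidate TP count, before contexts are even considered. The TR names below are placeholders, not entries from the actual tabulation.

```python
from itertools import product

# The three link configurations discussed above.
CONFIGS = ["HTML-to-CGM", "CGM-to-CGM", "CGM-to-HTML"]

def expand(trs, configs=CONFIGS):
    """Generate one candidate TP label per (TR, link configuration) pair."""
    return [f"{tr}/{cfg}" for tr, cfg in product(trs, configs)]
```

Even a modest TR table (say 50 entries) expands to 150 candidate TPs under this crossing, which is why the pragmatic priority limits of item #3 are needed.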

5.2.4 Categorical TP Prioritization

In terms of general functional category, we propose the following priority order:

  1. full URI fragment support;
  2. grobject and its attributes;
  3. para and subpara as special cases of grobject;
  4. layers;
  5. para and subpara in "search" context.

By level of detail, we will prioritize:

Note that BE tests can (we believe) lump together a number of TPs without being too careful about atomicity. So no further TP factorization or refinement is required. It is an open issue how much this still pertains for DT tests.

There is an issue that should be addressed in the context of anticipated future functional editions (ProfileEd greater than 1.0) of WebCGM: testing of picture-selection keywords in multi-picture tests. This is an issue because there is current consideration of deprecating and eventually prohibiting multi-picture metafiles. We propose that these should be tested at the BE level, but postpone spending any resources on DT tests until the disposition of the WebCGM spec issue is resolved.

5.2.5 What to Test

Specific TPs and their Test Case generation will be prioritized in the next project phase. This is largely a pragmatic choice - how to get the most useful test suite with available resources.

5.2.6 How to Test It

The design and specification of a test case for each selected TP will be addressed in the next project phase.

5.3 Harness Details

The design and specification of the details of the test harness will be addressed in the next project phase. We will follow these general guidelines:

We will incorporate the existing XML file for the static-graphics component. Its XML grammar should ideally be expanded for the new dynamic tests - add spec references, etc. The XSLT Conformance TC has been pursuing a test description grammar - an early reference can be found at [XSLT-XMLINSTANCE], for example (the grammar has grown considerably in the most recent proposal, [XSLT-TINMAN]).

6. Individual TC Descriptions

6.1 Overview

The actual individual Test Case (TC) descriptions will be generated in the next project phase. Whether or not we will actually generate formal TC descriptions is undecided. The answer will be pragmatic - does it accelerate test case production? If we do so, the following format will be used.

6.2 Format for TC Descriptions

The material in this section has been adapted from the SVG conformance test project (see [HEND01]).

A formal TC description is most interesting when labor is being divided for specification versus production tasks - different people doing each. That will not likely be the case during this initial WebCGM project.

Nevertheless, the task of writing the description in at least "sketch" detail forces one to actually design the test in sufficient detail that it is then easy to write the WebCGM (and HTML) content to implement it.

It is the Conceptual Description of the test (see below) which is most useful.

In previous work, a format similar to the following has been used:

  1. Name: using our test-name convention.
  2. Test Purpose: a brief one-sentence summary of what this TC addresses.
  3. Conceptual Description: brief prose description of the content of the TC. Typically one-to-few paragraphs long.
  4. Operator Script: Summary of test purpose(s), how to execute test, what to look for in result, some specifics about what constitutes pass/fail, allowable variations from expected results, etc.
  5. Associated Test Requirements: ideally, links or pointers from the test to the WebCGM spec. This would be subsumed by #6 if the document references can be implemented as hyperlinks.
  6. Document References: Hyperlinks or pointers from the test to the WebCGM spec.

The Associated Test Requirements (#5) and Document References (#6) are the crux of traceability. Such traceability has never been generated for the existing static-graphics tests (derived from the ATA suite); it has been generated during the TR-TP phases of the dynamic tests.

7. Tools

This is a summary of tools which we have identified as useful for the next phase of the project:

Most of these tools are from the former HSI CGM technology base. We intend to solicit the CGM community for contributions of additional tools, especially viewers, graphical editors, and rasterizers.

Issue to address about tools:

8. Glossary

8.1 Basic Effectivity Test (BE)

A test which lightly exercises one of the basic functionalities of the SVG specification. Collectively, the BE tests of an SVG functional area (chapter) give a complete but lightweight examination of the major functional aspects of the chapter, without dwelling on fine detail or probing exhaustively. BE tests are intended to simply establish that a viewer has implemented the functional capability.

8.2 Demo Test (DM)

A test which is intended to show the capabilities of the SVG specification, or a functional area thereof, but is otherwise not necessarily tied to any particular set of testable assertions from the SVG specification.

8.3 Detailed Test (DT)

Also called drill-down tests. DT tests probe for exact, complete, and correct conformance to the most detailed specifications and requirements of SVG. Collectively, the set of DT tests is equivalent to the set of testable assertions about the SVG specification.

8.4 Drill-down Test

See Detailed Test.

8.5 Error Test (ER)

An Error Test probes the error response of viewers, especially for those cases where the SVG specification describes particular error conditions and prescribes viewer error behavior.

8.6 Intelligent Graphics (IG)

Graphics which contain non-graphical metadata, which associate non-graphical application-specific semantics or intelligence with the graphical elements.

8.7 Semantic Requirement (SR)

See Test Requirement.

8.8 Test Assertion (TA)

See Test Requirement.

8.9 Test Harness

The program, process, or context associated with test suite materials, which organizes the materials and test results for presentation to the user, and facilitates execution of and navigation through the test suite.

8.10 Test Requirement (TR)

A testable assertion which is extracted from a standard specification. Also called Semantic Requirement (SR) or Test Assertion (TA) in some literature. Example. "Non-positive radius is an error condition."

8.11 Test Purpose (TP)

A reformulation of a Test Requirement (or, one or more TRs) as a testing directive. Example. "Verify that radius is positive" would be a Test Purpose for validating SVG file instances, and "Verify that interpreter treats non-positive radius as an error condition" would be a TP for interpreter or viewer testing.

8.12 Test Case (TC)

As used in this project, an executable unit of the material in the test suite which implements one or more Test Purposes (hence verifies one or more Test Requirements). Example. An SVG test file which contains an elliptical arc element with a negative radius. In practice (and abstractly), the relationship of TRs to TCs is many-to-many.

8.13 Traceability

The ability, in a test suite, to trace a Test Case back to the applicable Test Requirement(s) in the standard specification, and ultimately to the appropriate section of the standard.

9. Bibliography

  1. [CGM1999] Computer Graphics Metafile, ISO/IEC 8632:1999, Parts 1, 3, 4, http://isotc.iso.ch/livelink/livelink/fetch/2000/2489/Ittf_Home/PubliclyAvailableStandards.htm
  2. Computer Graphics Metafile Conformance Testing - Full Conformance Testing for CGM:1992/Amd.1 Model Profile, NIST SBIR Final Report, January 1995.
  3. [WEBCGM10] WebCGM 1.0 Profile, W3C Recommendation, http://www.w3.org/TR/REC-WebCGM
  4. [NIST-ATA] NIST CGM Test Suite for ATA Profile, http://www.itl.nist.gov/div897/ctg/graphics/cgm_form.htm
  5. [NIST-DOM] NIST DOM Test Suite, http://xw2k.sdct.itl.nist.gov/xml/dom-test-suite.html
  6. [W3C-CSS] W3C CSS Test Suite, http://www.w3.org/Style/css/Test/
  7. [NIST-XML] OASIS/NIST XML Test Suite, http://www.oasis-open.org/committees/xmltest/testsuite.htm
  8. [NIST-VRML] "Interactive Conformance Testing for VRML", http://www.itl.nist.gov/div897/ctg/vrml/design.html
  9. [WEBCGM-DEMO], a tutorial-demo of basic WebCGM principles, http://www.cgmopen.org/technical/index.html
  10. [ISO10641] ISO 10641, section 5, "Conformance testing requirements within graphics standards".
  11. [HEND99-2] "SVG Conformance Suite - Preliminary Design for Path", Lofton Henderson, 12 Nov. 1999.
  12. [HEND99-1] "SVG Conformance Test Suite Design - Outline of Proposed Approach", Lofton Henderson, 12 November 1999.
  13. [CRUIK99]: "Comparison of the ATA, WebCGM, CALS, and ANSI/ISO Computer Graphics Metafile Profiles", Cruikshank, Portillo, Ruff, 1999.
  14. [HEND01]: "SVG Conformance Test Suite - Test Builder's Manual", Rev 2.04, Lofton Henderson, 2001, http://www.w3.org/Graphics/SVG/Test/svgTest-manual.htm
  15. [XSLT-XMLINSTANCE], http://lists.oasis-open.org/archives/xslt-conformance/200006/msg00019.html
  16. [XSLT-SPECLINK], http://lists.oasis-open.org/archives/xslt-conformance/200009/msg00001.html
  17. [XSLT-TINMAN], http://lists.oasis-open.org/archives/xslt-conformance/200012/msg00000.html