Summary of the Third PIF Workshop

Aug. 4th - 5th, 1997

Stanford University.

Participants: Christoph Bussler (Boeing), Adam Farquhar (Stanford Univ.), Michael Gruninger (Univ. of Toronto), Pat Hayes (Beckman Institute at the Univ. of Illinois & Univ. of Southern Florida), Amy Knutilla (NIST), Jintae Lee (Univ. of Hawaii & MIT Center for Coordination Science), Chris Menzel (Texas A&M Univ. & KBSI), Adam Pease (Teknowledge), Steve Polyak (Univ. of Edinburgh), Craig Schlenoff (NIST), Mike Uschold (Boeing)

The workshop had three major goals: solidifying the semantics of the PIF-CORE and the extensions, assessing the extension strategy, and exploring collaboration possibilities with related projects. The workshop was accordingly divided into three parts: the semantics issues were discussed on the first day, and the extension and collaboration topics on the second. The schedule is attached at the end of this summary. The report on the semantics is being written by Dr. Gruninger and Dr. Menzel; the following reports on the other two topics.

1. SEMANTICS (Aug. 4th):

1.1 Architecture for PIF

The semantics session opened with a presentation of the logical structure of PIF. Everyone agreed that PIF is a theory: a set of axioms for the primitive constants and predicates of the PIF language, together with a set of conservative definitions of relevant classes and relations, and a set of foundational theories. A foundational theory is an expressively powerful theory that can be used to define the PIF primitives and derive its axioms as theorems, and which can therefore serve as an appropriate framework for evaluating the expressiveness of PIF and the soundness and completeness of its axiomatizations.

For usability, the theory of PIF has an object-oriented presentation which preserves the semantics of the theory.

1.2 Axiomatization of PIF

Chris Menzel offered a candidate axiomatization of the basic PIF classes and relations, along with some informal expositions of PIF's intended semantics. The document is still being revised, but is available by request from Jintae Lee.

1.3 Metatheoretic Aspects of PIF

Several constructs within PIF require the introduction of classes and relations defined in the metatheory of the theories underlying PIF.

In particular, preconditions, postconditions, and decisions are defined as classes of sentences. In this session, Michael Gruninger presented an axiomatization of these classes using KIF-Meta.

The advantage of this axiomatization is that we can formally define different syntactic restrictions on these sentences within KIF-Meta. For example, ground sentences (with no quantifiers) or conjunctive sentences are simply subclasses of sentences within KIF.

The proposal was that the language of PIF-Core would not contain any syntactic restrictions. Rather, such restrictions would be specified as a protocol, or exchange convention, that can be associated with different PSVs. Protocols would be defined using KIF-Meta.
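As a rough illustration of such a protocol-style restriction, the following Python sketch checks the "ground sentence" restriction over nested tuples standing in for KIF sentences. This is only an analogy: the actual restriction would be stated in KIF-Meta, not in code, and the tuple encoding is an assumption made for the example.

```python
# Sketch of a protocol-style syntactic restriction (ground sentences only).
# Nested tuples stand in for KIF sentences; quantified sentences start
# with "forall" or "exists".

QUANTIFIERS = {"forall", "exists"}

def is_ground(sentence):
    """Return True if the sentence contains no quantifiers."""
    if isinstance(sentence, tuple):
        if sentence and sentence[0] in QUANTIFIERS:
            return False
        return all(is_ground(part) for part in sentence)
    return True

print(is_ground(("and", ("status", "order-1", "filled"))))         # True
print(is_ground(("exists", ("?x",), ("status", "?x", "filled")))) # False
```

A protocol associated with a PSV could then admit only those precondition and postcondition sentences that pass such a check.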

1.4 Object-Oriented Presentation of PIF

Once we have agreed on the definitions and underlying theories for PIF, we can proceed to the next problem -- defining an object-oriented presentation for PIF which preserves the semantics of the theories.

We agreed to specify all classes and relations in the axiomatization using the Frame Ontology, and then see how easy it would be to map this object-oriented presentation to other object-oriented presentations, such as UML.

2. EXTENSION (Aug. 5th)

It was agreed earlier that the best way to drive the extension effort is with a set of concrete scenarios. A few scenarios had been considered as frameworks for evaluating and extending the PIF-CORE. We had also identified the various functions that such scenarios can serve and, from them, generated desiderata for an ideal scenario. Based on this study, we decided to create our own scenario by building on the supply chain process scenario that the Workflow Management Coalition (WfMC) used for the interoperability demo at the Business Workflow conference in Amsterdam, Oct. '96. The supply chain scenario was chosen because it is realistic and facilitates comparison between PIF and the interoperability work of the WfMC (WPDL).

In order to have the scenario reflect our own requirements, however, we extended the WfMC scenario by including the front and back ends. That is, in the current scenario the supply chain process is modeled using the Process Handbook, a process modeling tool developed at MIT; the model is then translated into a PIF representation, which is in turn translated into an IDEF3 description, which is ultimately used by the ProSIM software, a simulation support tool developed at KBSI. We will also pursue other scenarios concurrently, but we decided to focus on this one if available resources dictate a choice.

Using this scenario, we did preliminary work designed mainly to identify the major issues to be discussed at the workshop. A small part of the supply chain process was modeled using the Process Handbook, and a PIF representation of the process was constructed independently. These two representations were then compared to identify translation issues. Similarly, the same part of the process was independently represented in IDEF3, and its translation from the PIF representation was explored to identify further issues.

The results of this preliminary work were presented at the workshop. The issues that came out of this work were then presented and discussed. The issues were:

1. Given a process that cannot be represented with just the PIF-CORE, how do we decide on the additional constructs needed?

In the preliminary work done so far, we used a fairly straightforward strategy of extracting the major verbs from the process description and turning them into constructs. Take the following process description, for example: "An order for goods is taken by the retailer using an order creation application which captures the order details." The major activities that can be extracted from this description are "take order," "use an order creation application," and "capture the order details."

However, before we introduce constructs for these activities, we need to consider what other concepts they presuppose. That is, given that a new construct has to be a specialization of some construct in the PIF-CORE, we would need to know what constructs might be needed between the actual construct needed for the scenario (e.g. TAKE ORDER) and its generalization in the core (e.g. ACTIVITY).

The strategy taken for the preliminary work was to use WordNet. Given the most specific construct we needed (e.g. TAKE), we looked up its hypernym list and treated the items in the ascending list as a generalization hierarchy (e.g. TAKE -> GET -> TRANSFER POSSESSION -> ACTIVITY).
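The hypernym-list strategy can be sketched as follows. The chain for TAKE is hard-coded here; in practice it would come from a WordNet lookup.

```python
# Sketch of the hypernym-list strategy: turn an ascending hypernym chain
# into child-parent specialization links. The chain is hard-coded here;
# in practice it would be obtained from WordNet.

def chain_to_links(chain):
    """Pair each construct with its immediate generalization."""
    return list(zip(chain, chain[1:]))

take_chain = ["TAKE", "GET", "TRANSFER-POSSESSION", "ACTIVITY"]

for child, parent in chain_to_links(take_chain):
    print(f"{child} specializes {parent}")
```

Each pair produced this way is a candidate specialization link between a scenario construct and its generalization, terminating at the PIF-CORE construct ACTIVITY.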

At the workshop, we discussed whether this strategy was viable or whether we could find a better one. Some questioned the appropriateness of using WordNet, which was designed mainly for linguistic purposes, for our purpose of sharing a process ontology. It was replied that WordNet was more a heuristic device, used in the absence of better means to generate candidate intermediate generalizations from which a more principled generalization hierarchy could be extracted. The CYC ontology was mentioned as another such heuristic device, but its currently released portion did not seem substantial enough to be useful for our purpose.

An alternative strategy is not to worry about the intermediate generalizations until we have to. Under this strategy, a PSV extension would be created containing just the specific constructs needed for the scenario process. These constructs would, for now, be immediate specializations of the core construct ACTIVITY. In the course of representing additional processes and scenarios, other such specific extensions would be formed. By comparing these specific extensions, we would then induce common generalizations from them and form a separate extension. This strategy has the advantage that generalizations are naturally induced from more concrete extensions. However, it was pointed out that we do not have enough time and resources for such natural induction, and that this strategy requires a mechanism for gracefully injecting intermediate nodes into the generalization hierarchy at any time without severely affecting the ways in which existing modules are used for translation.

Based on these considerations, we will continue to use WordNet to generate candidate generalization hierarchies for the constructs identified in the scenario. However, these candidates are to be treated only as such; any final hierarchy will require more principled justification.

2. How do we group the non-core constructs in a modular way?

Once we identify the constructs needed for the scenario, they need to be grouped so that an extension module contains only those constructs that are mutually consistent and often used together. At the same time, the user should be able to adopt a set of modules containing the constructs they need without also having to adopt many they do not.

Again, in the preliminary work done so far, WordNet provided an initial suggestion as to where to draw module boundaries. Each level in the hypernym list could be treated as a separate level in the module hierarchy. For example, the activities TAKE, GET, and TRANSFER POSSESSION would belong to three modules at three successive levels, respectively. Other constructs appropriate to these levels could then be grouped into the same modules: for example, GIVE would belong to the module to which TAKE belongs, PUT to the module to which GET belongs, and so on.
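A minimal sketch of this level-based grouping follows. The chains descend from the core construct, and the exact placement of PUT and GIVE in them is hypothetical, chosen only to illustrate how siblings land in the same module.

```python
# Sketch of level-based module grouping. Chains descend from the core
# construct ACTIVITY; constructs at the same depth share a module.
# The chains themselves are hypothetical illustrations.

from collections import defaultdict

def group_by_depth(chains):
    """Assign each construct to the module for its depth in the hierarchy."""
    modules = defaultdict(set)
    for chain in chains:
        for depth, construct in enumerate(chain):
            modules[depth].add(construct)
    return dict(modules)

chains = [
    ["ACTIVITY", "TRANSFER-POSSESSION", "GET", "TAKE"],
    ["ACTIVITY", "TRANSFER-POSSESSION", "PUT", "GIVE"],
]

print(group_by_depth(chains))
# e.g. the depth-2 module holds GET and PUT, the depth-3 module TAKE and GIVE
```

The depth index is a stand-in for a named module at that level of the hierarchy.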

An alternative strategy, again, is to let module formation take place in a natural, inductive way. First, all the constructs needed for a particular process would be grouped into a module. As more processes are represented, more modules are developed. Constructs representing common concepts would then be extracted from these modules into separate modules. Again, however, it was pointed out that we do not have enough time and resources for such natural induction, and that we need a mechanism for gracefully injecting intermediate modules into the module hierarchy at any time without severely affecting the ways in which existing modules are used for translation.

Another suggestion was that the requirements compiled by the NIST Process Specification Language project be used as a guideline for the modules. The PSL requirements are grouped hierarchically and provide a more principled framework for modularizing the constructs. There is a question of whether they are fine-grained enough to provide specific guidelines, but the consensus was that they are worth trying as a basis.

Yet another suggestion was made by Dr. Austin Tate, based on his years of work on representations for specific purposes. The modules suggested in this scheme are:

1) resources and tools (relationships between objects and activities)

2) agents and actors (relationships between objects that can perform activities and those activities)

3) objects (attributes of process-relevant objects)

4) constraints
   - temporal constraints (constraints on and between time points)
   - object constraints (constraints on and between objects)
   - world state conditions/effects constraints (generalization of activity input/output constraints)
   - authority constraints
   - resource constraints
   - spatial constraints

5) agent-to-agent organizational relationships

6) groupings

7) annotations

8) associated data (diagrams, etc.)

3. What mechanism is used for registering, organizing, and accessing the extension modules?

The extension modules are to be written by domain experts distributed over space and time. We need a mechanism for letting users know what modules already exist and how they are related, so that they can decide when and where a new module is needed, and for preventing conflicts in definitions and naming.
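The name-conflict portion of such a mechanism might look like the following sketch. The registry interface and the module names are hypothetical, since no actual mechanism has been chosen.

```python
# Sketch of a module registry with name-conflict detection.
# Interface and module names are hypothetical illustrations.

class ModuleRegistry:
    def __init__(self):
        self._owner = {}  # construct name -> module that first defined it

    def register(self, module, constructs):
        """Record a module's constructs; return any name conflicts found."""
        conflicts = [(name, self._owner[name])
                     for name in constructs if name in self._owner]
        for name in constructs:
            self._owner.setdefault(name, module)
        return conflicts

registry = ModuleRegistry()
registry.register("transfer-possession-ext", ["TAKE", "GIVE"])
print(registry.register("order-ext", ["TAKE-ORDER", "TAKE"]))
# [('TAKE', 'transfer-possession-ext')] -- TAKE is already defined elsewhere
```

A real service would also need version tracking and support for browsing the module hierarchy, which this sketch omits.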

The NIST Identification Collaboration Service (NICS) was demonstrated at the workshop to assess its usefulness for our purpose. It seemed useful for detecting and reporting name conflicts. However, not having been designed for ontology management, it did not seem useful for supporting the naming or browsing of hierarchical ontologies.

A suggestion was made that we not worry about these issues until they actually become a problem, e.g. when there are over, say, a hundred modules. It was suggested that we focus instead on developing a small number of modules that will be useful to many users and that exemplify the practice of writing and using extensions.

4. Should PIF be able to accommodate existing object ontologies?

In trying to represent the scenario process, we needed not only additional activities but also additional objects (e.g. ORDER, SOFTWARE). These objects may be defined in the extension modules. However, several object ontologies already exist, such as CYC and the Penman Upper Model. We saw the need for a PIF description to be able to reference objects from these external ontologies. For example, one may want to use an object in exactly the sense defined in one of them, or the external definition may be good enough for one's purpose and not worth the effort of redefining within PIF. We therefore decided that a PIF description may include an object with a reference to an external ontology (including information about the version used and how to access it). The exact mechanism for doing so is yet to be discussed.
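Such an external reference might carry attributes like the following. The field set and the version/access values are hypothetical, since the exact mechanism is still to be discussed.

```python
# Sketch of an object reference into an external ontology.
# Field set and values are hypothetical; the actual PIF mechanism
# has not been decided.

from dataclasses import dataclass

@dataclass(frozen=True)
class ExternalObjectRef:
    name: str      # object name as defined in the external ontology
    ontology: str  # e.g. "CYC"
    version: str   # which release of the ontology the definition comes from
    access: str    # how to obtain that release

order_ref = ExternalObjectRef(
    name="ORDER",
    ontology="CYC",
    version="1997-public-release",              # hypothetical value
    access="contact the ontology maintainers",  # hypothetical value
)
print(order_ref.ontology)  # CYC
```

Recording the version and access information is what allows a translator on the receiving end to resolve the reference to the same definition the author intended.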

5. What are the dimensions along which the core can be extended?

One obvious way in which the PIF CORE can be extended is by adding content, i.e. more constructs that specialize those in the core. However, content is not the only dimension. A module, for example, can stipulate a protocol different from that of the CORE. For instance, there may be a module that prohibits multiple inheritance; any module that further specializes it would then be restricted to single inheritance. Along this dimension, one module may restrict the core while another extends it, with extension defined in terms of expressiveness. We also agreed that the core can often be extended by adding constructs specifically needed for a translation between PIF and another representation (e.g. IDEF3 or Petri Nets).

In an attempt to articulate the types of extensions, we identified three dimensions: vocabulary, axioms, and protocols (including syntax). A pure addition of vocabulary by introducing a set of definitions would be a conservative extension. A new set of axioms extends the core non-conservatively, with or without new vocabulary. The core can also be elaborated further by making meta-theoretic assumptions about the protocols. The module specifying single inheritance would be an example of an extension defined by additional axioms without new vocabulary; the application-specific modules mentioned above would be special cases of new vocabulary, new axioms, or both.


3. COLLABORATION (Aug. 5th)

There are several projects, such as the Process Specification Language (PSL), the Unified Modeling Language (UML), the Core Plan Representation (CPR), and CDIF (CASE Data Interchange Format), all of which aim to support the sharing of process-related data. Although the goals differ among these projects, it was deemed worth articulating their relations to PIF and exploring ways to work with them.

Of these related projects, PIF has been working most closely with PSL, whose working members include several from the PIF group. As a result, the development of PSL has closely paralleled that of PIF, resulting in a PSL CORE that is quite compatible with the PIF CORE. Drs. Craig Schlenoff and Amy Knutilla from the PSL project were able to join the workshop. In discussion with them, we decided that it would be mutually beneficial if the PIF CORE and the PSL CORE were identified, so that our future efforts can have a common basis and be better coordinated. Hence we tentatively proposed that the release of PIF 2.0 be identified with that of PSL 1.0. After that release, we could then coordinate our extension efforts, with PSL focusing on the manufacturing side and PIF on the business process domain. Such a division of labor would not only make effective use of resources but also let us compare our efforts, especially the generic extensions that both groups will have to work out.

In discussion with Adam Pease from the CPR project, who also attended the workshop, the CPR ontology was found to be fairly compatible with the PIF ontology. The ARPI project, under which the CPR has been developed, has a renewed initiative for a common plan representation. The PIF project is now funded under the rubric of the ARPI project, and a PIF working group member, Dr. Austin Tate, is one of the four "definition team" members of this initiative. Through this association, as well as the compatibility with the CPR, PIF should be able to influence the further development of the common plan representation.

In an additional effort to articulate the relation between PIF and the other related groups, we worked out the following scenario:

A business analyst is hired by an organization to work with management to suggest process improvements. The analyst begins by modeling the current processes with a modeling tool that uses the UML language (and possibly the UML process-specific extensions). This tool was specifically designed to support business process modeling. The UML diagrams are used as a two-way communication device between the analyst and members of the organization. The analyst then identifies various processes where he or she believes there may be room for improvement. The analyst turns to the Process Handbook to import "best-practice" processes that may replace the existing inefficient ones. To accomplish this, the selected Process Handbook processes are translated to PIF, and the PIF representation is then translated to CDIF, a declarative, textual representation that can be used as input to the modeling tool. Once imported, these processes are integrated with the existing UML presentation. The team next decides to run a simulation of these processes for specific case scenarios. They own a simulation package (Witness) that can accept a declarative model as input.

The modeling tool is now used to export the UML process models to CDIF. A CDIF-PIF translator transforms the description into PIF, and another translation is then made from PIF to IDEF3. The IDEF3 model (using Dr. Chris Menzel's extension to the IDEF3 elaboration language) is imported into ProSIM. As ProSIM is specialized for simulation, the model is detailed and checked for simulation support. ProSIM's Witness compiler is then used to build the Witness model that will be used for simulation. During simulation, modifications may be made to the process structures. Once the team is satisfied with the overall results of the simulation, the processes are exported back to the IDEF3 representation, translated through PIF back to CDIF, and imported back into the process modeling tool. The analyst hands the UML models off to management as the deliverable of the consultation.
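The export half of this round trip can be viewed as simple function composition. The translator functions below are hypothetical stand-ins that only tag the model's format; real translators would parse one representation and emit the next.

```python
# Sketch of the export pipeline as function composition.
# Translator names are hypothetical stand-ins.

def compose(*steps):
    """Chain translators left to right into a single pipeline."""
    def pipeline(model):
        for step in steps:
            model = step(model)
        return model
    return pipeline

def uml_to_cdif(model):
    return {"format": "CDIF", "source": model}

def cdif_to_pif(model):
    return {"format": "PIF", "source": model}

def pif_to_idef3(model):
    return {"format": "IDEF3", "source": model}

export_for_simulation = compose(uml_to_cdif, cdif_to_pif, pif_to_idef3)
print(export_for_simulation({"format": "UML"})["format"])  # IDEF3
```

The import half of the round trip is the same pipeline with the inverse translators composed in reverse order.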

It was concluded that we should make these groups aware of our willingness to work with them. The scenario above could then be used as a basis for clarifying their relation to PIF and for collaboration with them.