CESNET - Provenance Challenge Member Page
Work in progress
Participating Team
- Short team name: CESNET
- Participant names:
- Project URL: http://egee.cesnet.cz/en/JRA1/
- Project Overview: Job Provenance (JP for short) is part of the gLite Grid middleware implementation
- Provenance-specific Overview: JP is a job-centric system. The Grid job is the primary entity of interest,
and all data are organised on a per-job basis.
JP collects data about the job life cycle, including job inputs and outputs, infrastructure state, and user annotations.
- Relevant Publications:
- IPAW'06 presentation and paper gLite Job Provenance.
- CGW'05 presentation and paper Services for Tracking and Archival of Grid Job Information.
- CHEP'04 poster Distributed Tracking, Storage, and Re-use of Job State Information on the Grid.
- slides from the workshop
See also references and glossary at the bottom of this page.
Workflow Representation
Job Provenance was developed as a part of the gLite middleware.
Although its design is more general, capable of handling virtually any Grid job,
the current implementation supports only gLite jobs,
and we use gLite to implement the Provenance Challenge workflow.
Therefore we provide a brief overview of the relevant parts of job processing in gLite
before the actual description of the workflow implementation.
gLite job processing in a nutshell
The job is the only way the user can access computational resources in gLite.
Although not completely restricted to them, gLite is designed to support traditional batch, i.e. non-interactive, jobs.
Upon creation the job is assigned a unique immutable Job Identifier (JobId).
The JobId is used to refer to the job throughout its life and afterwards.
The user describes the job (i.e. executable, parameters, input files, etc.) using the
Job Description Language (JDL),
based on the extensible Classified Advertisement (classad) syntax.
The description may grow fairly complex, including requirements on the execution environment,
proximity of input and output storage, etc.
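For illustration, a minimal JDL description of a simple job might look as follows (a sketch only; the attribute values are placeholders, not taken from our actual workflow):

[
  Executable   = "align_warp";
  Arguments    = "-m 12 -q";
  StdOutput    = "std.out";
  StdError     = "std.err";
  Requirements = other.GlueCEPolicyMaxCPUTime > 60;
]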
Processing of the job can be summarised as follows:
- the job is submitted via a User Interface (either command-line or graphical)
- the Workload Manager (WM) queues the job and starts finding a suitable Computing Element to execute it
- the job is passed to the chosen Computing Element and runs there
- after completion, the user can retrieve the job output
- all the time, the job is tracked by the Logging and Bookkeeping (LB) service, providing the user
with a view of the job state and further details
- after the job output is retrieved, all the middleware data on the job (namely the job trace in LB)
are passed to Job Provenance and purged from their original locations
- annotations can be added to the job in the form of tags (name = value pairs) during its lifetime via LB
(even from inside the running application) or any time afterwards via JP
Besides simple jobs, gLite also supports complex ones: job workflows in the form of
Directed Acyclic Graphs (DAGs).
A DAG is completely described, using a nested JDL syntax,
as a set of its nodes (simple jobs) and the execution dependencies among them, as sketched below.
DAG processing is implemented by interfacing the WM planning machinery with
Condor DAGMan.
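A sketch of such a nested-JDL DAG description, with two nodes and one dependency (node names and executables are placeholders; consult the JDL specification for the authoritative syntax):

[
  Type = "dag";
  nodes = [
    align_warp_1 = [
      description = [ Executable = "align_warp"; ];
    ];
    reslice_1 = [
      description = [ Executable = "reslice"; ];
    ];
    dependencies = { { align_warp_1, reslice_1 } };
  ];
]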
TODO: references JDL, WMS, LB
Challenge workflow
We implement the challenge workflow as a gLite DAG job.
The structure of the DAG follows the specified workflow exactly, with the following mapping:
- procedures become nodes of the DAG, i.e. they are turned into normal gLite jobs during the DAG processing
and executed on the Grid computing resources.
Besides downloading and uploading the data files (see below),
each such job involves running the appropriate AIR, FSL, or ImageMagick utility.
- dependencies among procedures are reflected in the dependencies of the DAG.
Therefore, e.g., all four align_warp invocations can run in parallel,
but softmean must be preceded by the successful completion of all four reslice instances.
- data items, both input and output, are external files with respect to the workflow implementation, because a unified shared filesystem
cannot be expected on the Grid computing resources.
Therefore each job is responsible for downloading all its inputs and uploading all its outputs.
In our experimental runs
we put the files on a dedicated GridFTP server and access them (both download and upload) with the
gsiftp://
protocol (which also solves access control -- a running gLite job possesses delegated user credentials).
Consequently, the data items are identified by their full URLs in our implementation.
We might have used the gLite data services, identifying files with GUIDs or logical file names.
However, this approach would make the implementation more obscure while not exhibiting any
important provenance features.
We provide a template for the workflow JDL.
It contains placeholders for the data files;
details on instantiating and submitting it with the gLite command-line tools can be found
at this page.
Provenance Trace
Upload a representation of the information you captured when executing the workflow. Explain the structure (provide pointers to documents describing your schemas etc.)
As noted above, when the execution of the workflow is finished, the JP service can collect traces
of the workflow's life from various Grid subsystems.
Currently only LB is instrumented to provide the trace; however,
the encompassed data are rich and completely sufficient for the challenge.
The LB trace is uploaded as a raw LB dump file; three sample snapshots are available
here (files dump[123]).
JP provides the user with an interface to retrieve such raw files, and their format is public in principle
(NetLogger ULM according to draft-abela-05,
LB-specific fields are documented in the LB User's Guide).
However, access to the raw files is not supposed to be the typical JP usage.
On the contrary,
the end user of JP sees all the available data transformed into the form of logical
JP attributes,
i.e. "namespace:name = value" pairs.
Attribute values
are digested from the raw traces by JP plug-in modules, hiding the internal structure, syntax, format version, and other implementation
details.
At this level the provenance trace of an executed workflow is represented by a set of JP attributes
and their values, assigned to both the workflow and all its subjobs (nodes).
There are the following classes of attributes:
- JP system attributes (namespace http://egee.cesnet.cz/en/Schema/JP/System):
  - jobId
  - owner: identity of the job submitter
  - regtime: when the job was registered with the middleware
- attributes digested from the LB trace, conforming to the schema http://egee.cesnet.cz/en/Schema/LB/JobRecord
- attributes digested from JDL, describing the workflow structure (namespace http://egee.cesnet.cz/en/Schema/JP/Workflow):
  - ancestor: JobId(s) of the immediately preceding job(s) in the workflow
  - successor: JobId(s) of the immediately following job(s) in the workflow
- unqualified user tags, logged via LB (see above); they are reported in the namespace http://egee.cesnet.cz/en/WSDL/jp-lbtag
All the attributes may occur multiple times; e.g., as softmean
must have been preceded by four reslice's in the challenge workflow,
there are four occurrences of the ancestor attribute on the softmean nodes.
For the specific implementation of the challenge workflow we use LB user tags
to store additional information about the workflow nodes.
JP turns these values into attributes of the fourth class on the list above.
The following table summarizes their meaning:
Attribute name | Attribute meaning |
IPAW_OUTPUT | Names of files generated by this node |
IPAW_INPUT | Names of input files for this node |
IPAW_STAGE | Name (number) of workflow stage of this node |
IPAW_PROGRAM | Name of the process this node represents |
IPAW_PARAM | Specific parameters of this node's processing |
IPAW_HEADER | Anatomy header property (global maximum in our case) |
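For illustration, after JP digests these tags, an align_warp node carries attributes like the following (the values are illustrative only):

http://egee.cesnet.cz/en/WSDL/jp-lbtag:IPAW_PROGRAM = "align_warp"
http://egee.cesnet.cz/en/WSDL/jp-lbtag:IPAW_STAGE = "1"
http://egee.cesnet.cz/en/WSDL/jp-lbtag:IPAW_PARAM = "-m 12, -q"
http://egee.cesnet.cz/en/WSDL/jp-lbtag:IPAW_INPUT = "gsiftp://.../anatomy1.img"
http://egee.cesnet.cz/en/WSDL/jp-lbtag:IPAW_OUTPUT = "gsiftp://.../anatomy1.warp"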
Provenance Queries
How to query Job Provenance
Details on the JP architecture, its components, the dataflows among them,
and the reasons that motivated the design are given in the cited references.
To understand our implementation of the challenge queries one only has to be aware
that there are two distinct querying endpoints:
- JP Primary Storage (JPPS), where the data on jobs are stored permanently,
can be queried for any attribute of a particular job.
However, a concrete JobId must be known.
- JP Index Server (JPIS) is a configurable cache of a subset of jobs and attributes.
It can search for jobs matching query criteria, specified as comparisons
of an attribute with a constant value.
Concrete JobIds need not be known.
Both querying endpoints are exposed as web-service interfaces.
The challenge queries are implemented as Perl scripts which call elementary clients of both services.
Our line of the ProvenanceQueriesMatrix is here; the explanation of the query status is part of each query description.
Teams | Q1 | Q2 | Q3 | Q4 | Q5 | Q6 | Q7 | Q8 | Q9 |
CESNET team | | | | | | | | | |
Query #1
Find the process that led to Atlas X Graphic / everything that caused Atlas X Graphic to be as it is. This should tell us the new brain images from which the averaged atlas was generated, the warping performed etc.
Inputs
URL of the queried Atlas X Graphic file
Outputs
List of nodes (subjobs) of the workflow that contributed to the queried file:
- input and output files
- stage of the workflow, program name and parameter values
Implementation
The query is implemented as a graph search where the vertices are the nodes of the DAG
and the oriented edges are given by the ANCESTOR attribute.
The search is seeded with a JPIS query retrieving the JobId of the last node
of the workflow, which produced the queried file directly, i.e. typically the
convert utility.
Pseudocode:
- JPIS query: JobId of the DAG node having IPAW_OUTPUT = 'Atlas X Graphic'
- initialise job_list with the retrieved JobId
- (graph search) while there are unprocessed elements in job_list:
  - pick a list element job
  - JPPS query: all values of the ANCESTOR attribute of job
  - insert each retrieved value into job_list unless it is already there
- for each element of job_list:
  - JPPS query: attributes IPAW_INPUT, IPAW_OUTPUT, IPAW_PROGRAM, IPAW_PARAM, IPAW_STAGE
- sort job_list according to IPAW_STAGE
- pretty-print job_list, including all the retrieved attributes
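A minimal Perl sketch of this search follows. The helpers jpis_query() and jpps_get_attr() stand for the elementary web-service clients; their names and signatures are assumptions, not the actual client interface:

#!/usr/bin/perl
use strict;
use warnings;

# Hypothetical wrappers around the JPIS/JPPS clients:
# jpis_query($attr, $value) returns the JobIds of matching jobs,
# jpps_get_attr($jobid, $attr) returns all values of $attr for $jobid.
sub jpis_query    { die "stub: call the JPIS web-service client here" }
sub jpps_get_attr { die "stub: call the JPPS web-service client here" }

my $file = shift or die "usage: $0 <Atlas X Graphic URL>\n";

# Seed: the node that produced the queried file directly.
my @job_list = jpis_query('IPAW_OUTPUT', $file);
my %seen = map { $_ => 1 } @job_list;

# Graph search along the ANCESTOR edges.
my @queue = @job_list;
while (my $job = shift @queue) {
    for my $anc (jpps_get_attr($job, 'ANCESTOR')) {
        next if $seen{$anc}++;
        push @job_list, $anc;
        push @queue,    $anc;
    }
}

# Retrieve the descriptive attributes of each node found.
my %attrs;
for my $job (@job_list) {
    $attrs{$job}{$_} = [ jpps_get_attr($job, $_) ]
        for qw(IPAW_INPUT IPAW_OUTPUT IPAW_PROGRAM IPAW_PARAM IPAW_STAGE);
}

# Sort by workflow stage (last stage first) and pretty-print.
for my $job (sort { $attrs{$b}{IPAW_STAGE}[0] <=> $attrs{$a}{IPAW_STAGE}[0] } @job_list) {
    print "jobid $job:\n";
    print "  attr $_: @{$attrs{$job}{$_}}\n" for sort keys %{$attrs{$job}};
}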
Full implementation
Sample output
The output below is cut and reformatted; here is the original output.
$ ./query1.pl gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla-x.gif 2>/dev/null
Results
=======
jobid https://skurut1.cesnet.cz:9000/hvkpZCsRsiqrxs5K_bo7Ew:
attr IPAW_STAGE: 5
attr IPAW_PROGRAM: convert
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla-x.pgm
attr IPAW_OUTPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla-x.gif
attr CE: skurut17.cesnet.cz:2119/jobmanager-lcgpbs-voce
jobid https://skurut1.cesnet.cz:9000/02ZaAADKyebzggYPp4M9tA:
attr IPAW_STAGE: 4
attr IPAW_PROGRAM: slicer
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla.hdr
gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla.img
attr IPAW_OUTPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla-x.pgm
attr CE: skurut17.cesnet.cz:2119/jobmanager-lcgpbs-voce
jobid https://skurut1.cesnet.cz:9000/wGMnTvCILtiSTi7ZOQwfTQ:
attr IPAW_STAGE: 3
attr IPAW_PROGRAM: softmean
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy1-resliced.img
...
attr IPAW_OUTPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla.img
gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/blabla.hdr
attr CE: skurut17.cesnet.cz:2119/jobmanager-lcgpbs-voce
jobid https://skurut1.cesnet.cz:9000/9d0XMwfPuefR9woAFkDplQ:
attr IPAW_STAGE: 2
attr IPAW_PROGRAM: reslice
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy3.warp
...
attr IPAW_OUTPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy3-resliced.img
...
attr CE: skurut17.cesnet.cz:2119/jobmanager-lcgpbs-voce
jobid https://skurut1.cesnet.cz:9000/RglBtUz0IzwSeM32KLnHPg:
attr IPAW_STAGE: 2
attr IPAW_PROGRAM: reslice
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy4.warp
...
...
jobid https://skurut1.cesnet.cz:9000/wdWQHL0-RXkd3VeNcSrTaw:
attr IPAW_STAGE: 2
attr IPAW_PROGRAM: reslice
attr IPAW_PARAM:
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy1.warp
...
...
jobid https://skurut1.cesnet.cz:9000/xwIsN2JgGfsRuvYwh0QXsw:
attr IPAW_STAGE: 2
attr IPAW_PROGRAM: reslice
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy2.warp
...
...
jobid https://skurut1.cesnet.cz:9000/yM3sz8v6WCIPgi5-0m8L4w:
attr IPAW_STAGE: 1
attr IPAW_PROGRAM: align_warp
attr IPAW_PARAM: -m 12, -q
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy4.img
gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/reference.img
attr IPAW_OUTPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy4.warp
attr CE: skurut17.cesnet.cz:2119/jobmanager-lcgpbs-voce
jobid https://skurut1.cesnet.cz:9000/s47ihjBHQXqPkkNwA2iazg:
attr IPAW_STAGE: 1
attr IPAW_PROGRAM: align_warp
attr IPAW_PARAM: -m 12, -q
attr IPAW_INPUT: gsiftp://umbar.ics.muni.cz:1414/home/mulac/pch06/anatomy2.img
...
...
...
Comments
In the implementation we trade performance for readability.
Namely, with a suitable configuration of JPIS, all the JPPS queries,
which may easily become a bottleneck of the whole system, could be avoided.
Moreover, the queries could be combined in order to retrieve
all attributes of a job in a single hit.
Query #2
Find the process that led to Atlas X Graphic, excluding everything prior to the averaging of images with softmean.
Inputs
URL of the queried Atlas X Graphic file
Outputs
Same as for Query #1
Implementation
Exactly the same as Query #1, with the graph search cut once a node with IPAW_PROGRAM = 'softmean' is found
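In terms of the Query #1 sketch, the cut amounts to not expanding softmean nodes any further inside the graph-search loop (same hypothetical helpers as above):

# Inside the graph-search loop: do not follow the ANCESTOR edges of softmean.
my ($prog) = jpps_get_attr($job, 'IPAW_PROGRAM');
next if defined $prog && $prog eq 'softmean';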
Full implementation
Sample output
Almost the same as Query #1, with only the nodes up to softmean.
Available here.
Query #3
Find the Stage 3, 4 and 5 details of the process that led to Atlas X Graphic.
Inputs
URL of the queried Atlas X Graphic file
Outputs
Same as for Query #1
Implementation
Exactly the same as Query #1, with the final output filtered to contain only
jobs having IPAW_STAGE equal to 3, 4, or 5.
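With the Query #1 sketch, the filter is a single grep over the result list (reusing the %attrs hash gathered there):

# Keep only the stage 3, 4, and 5 nodes before printing.
@job_list = grep {
    my $s = $attrs{$_}{IPAW_STAGE}[0];
    defined $s && $s >= 3 && $s <= 5;
} @job_list;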
Full implementation
Sample output
Almost the same as Query #1, with only the nodes having IPAW_STAGE equal to 3, 4, or 5.
Available here.
Comments
The implementation is not optimal but more general: we do not impose any special semantics
on the value of the IPAW_STAGE attribute.
With the additional knowledge that a node is preceded in the workflow only by nodes
of a lower stage number, the search could be cut at IPAW_STAGE = 3, similarly to Query #2.
Query #4
Find all invocations of procedure align_warp using a twelfth order nonlinear 1365 parameter model (see model menu describing possible values of parameter "-m 12" of align_warp) that ran on a Monday.
Inputs
N/A
Outputs
Time, stage, program name, inputs, outputs of the matching workflow nodes
Implementation
JPIS is queried for jobs matching IPAW_PROGRAM = 'align_warp' and IPAW_PARAM = '-m 12'.
Among the other attributes, the job registration time is also retrieved,
and the output is filtered to jobs that ran on a Monday.
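The client-side Monday filter is a weekday test on the registration timestamp; a sketch, assuming each retrieved job is a hash reference carrying a Unix-timestamp regtime field:

# localtime()[6] is the day of the week, 0 = Sunday, 1 = Monday.
my @monday_jobs = grep { (localtime($_->{regtime}))[6] == 1 } @jobs;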
Full implementation
Sample output
Unfortunately, we didn't manage to find any Monday jobs in our test database;
however, there are some Thursday jobs ;-).
Comments
Job registration time, i.e. the submission time, is only an approximation
of the run time (the job may have spent a long time in a queue).
The actual job run time is available in the LB trace, though the current JP implementation
cannot extract it yet.
Therefore this is only a technical restriction, not a principal one.
The filter "ran on a Monday" is quite challenging. Currently we implement it on the client side, which is not
a scalable solution.
However, the JP concept foresees a solution of this issue via an already defined interface for
type plugins. A plugin, for a concrete type, defines the following methods:
- transformation of a value into a "queryable database form", which is stored at JPIS when a value of this type
arrives there (in addition to the literal value)
- user query comparison operators that transform a compared value into an SQL expression
Then, upon arrival at JPIS, the weekday number would be extracted from the timestamp and stored in an extra database column.
The plugin would also define an operator isWeekDay(x)
that would be transformed at query time into an expression referring to the new column.
The condition would therefore be evaluated at the SQL level, i.e. in the most efficient way.
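A sketch of what such a plugin could look like in Perl; the method names and the digest column name are assumptions, not the actual plugin interface:

# Digest method: run at JPIS when a timestamp value arrives;
# the result is stored in an extra, queryable database column.
sub digest_regtime {
    my ($regtime) = @_;                      # Unix timestamp
    return { regtime_weekday => (localtime($regtime))[6] };
}

# Operator method: turns isWeekDay(1) into an SQL expression
# referring to the digest column, evaluated by the database itself.
sub op_isWeekDay {
    my ($weekday) = @_;
    return sprintf 'regtime_weekday = %d', $weekday;
}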
Query #5
Find all Atlas Graphic images outputted from workflows where at least one of the input Anatomy Headers had an entry global maximum=4095.
Inputs
N/A
Outputs
List of Atlas Graphic files matching the query.
Implementation
JPIS is queried for jobs matching IPAW_HEADER = 'global_maximum 4095' (and possibly also IPAW_PROGRAM = 'align_warp').
The results of the query (JobIds of the matching jobs) are used to seed a graph search similar to
Query #1, but following the successor attribute of the workflow's nodes rather than ancestor.
The output files of the nodes having IPAW_STAGE = 5 are gathered and sorted to exclude multiple occurrences.
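Relative to the Query #1 sketch, only the seed and the direction of the edges change (same hypothetical helpers as above):

# Seed with the matching jobs and walk the SUCCESSOR edges.
my @job_list = jpis_query('IPAW_HEADER', 'global_maximum 4095');
my %seen  = map { $_ => 1 } @job_list;
my @queue = @job_list;
while (my $job = shift @queue) {
    for my $succ (jpps_get_attr($job, 'SUCCESSOR')) {
        next if $seen{$succ}++;
        push @job_list, $succ;
        push @queue,    $succ;
    }
}

# Gather the final (stage 5) output files, excluding duplicates.
my %outputs;
for my $job (@job_list) {
    my ($stage) = jpps_get_attr($job, 'IPAW_STAGE');
    next unless defined $stage && $stage == 5;
    $outputs{$_} = 1 for jpps_get_attr($job, 'IPAW_OUTPUT');
}
print "$_\n" for sort keys %outputs;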
Full implementation
Sample output
Available here
Comments
IPAW_PROGRAM = 'convert' can be used instead of IPAW_STAGE = 5 as a condition identifying the final output files.
Alternatively, they can be identified as outputs of nodes which have no successors.
The code can also be easily modified to record the graph traversal (details on the workflow nodes)
leading to a particular file,
and to display it with the file in a similar way as in the previous queries.
Query #6
Find all output averaged images of softmean (average) procedures, where the warped images taken as input were align_warped using a twelfth order nonlinear 1365 parameter model, i.e. "where softmean was preceded in the workflow, directly or indirectly, by an align_warp procedure with argument -m 12."
Inputs
Outputs
Implementation
JPIS is queried to retrieve the IPAW_PROGRAM = 'align_warp' jobs having IPAW_PARAM = '-m 12'.
The result is used to seed a graph search following the successor attribute.
The search is cut at IPAW_PROGRAM = 'softmean', and the outputs of those nodes are printed.
Full implementation
Sample output
Comments
The actual implementation of this query takes a more efficient (though less intuitive)
approach, following the reversed graph edges via the ancestor attribute.
In this way, JPPS queries are completely avoided and the number of JPIS queries is minimised.
Query #7
A user has run the workflow twice, in the second instance replacing each procedures (convert) in the final stage with two procedures: pgmtoppm, then pnmtojpeg. Find the differences between the two workflow runs. The exact level of detail in the difference that is detected by a system is up to each participant.
We use the Query #1 implementation to show the details of both workflows.
Then the differences are apparent -- there is one more stage in the second workflow,
and the IPAW_PROGRAM attribute values of its two final stages are
pgmtoppm and pnmtojpeg, respectively.
Inputs
Atlas graphics file name.
Outputs
Formatted in the same way as for Query #1; the differing workflow nodes are displayed.
Implementation
The workflow is implemented using a modified JDL template.
The query client is the same as for Query #1.
Sample output
TODO
Comments
Query #8
A user has annotated some anatomy images with a key-value pair center=UChicago. Find the outputs of align_warp where the inputs are annotated with center=UChicago.
Job Provenance gathers and organises information with the Grid job being the primary entity of interest.
While annotations of a job are an intrinsic part of it, direct annotations of
data are not.
Therefore this kind of query is not supported.
Similarly to Query #9, we might introduce dummy "producer jobs" (i.e. jobs having the particular data file assigned as their output)
that would carry the annotation.
However, we consider this approach too artificial.
Query #9
A user has annotated some atlas graphics with key-value pair where the key is studyModality. Find all the graphical atlas sets that have metadata annotation studyModality with values speech, visual or audio, and return all other annotations to these files.
As mentioned with Query #8, JP does not provide means of adding annotations to data directly.
However, annotations can be added to jobs (via the JPPS interface), and it makes good sense
to consider job outputs to be annotated with the job's annotations too.
Inputs
Value of the studyModality annotation.
Outputs
List of matching graphics files, together with their additional annotations.
Implementation
We assume the annotations to be assigned to whole workflows (i.e. not to their subjobs) in the form of JP attributes
in a dedicated namespace, e.g.
http://twiki.ipaw.info/Challenge/CESNET/Annotations.
Pseudocode:
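The following steps are a sketch of the intended implementation, mirroring Query #5:
- JPIS query: JobIds of the workflows having the studyModality annotation attribute equal to 'speech', 'visual', or 'audio'
- graph search following the successor attribute (as in Query #5), gathering the IPAW_OUTPUT files of the nodes with IPAW_STAGE = 5
- JPPS query: the remaining annotation attributes of each matching workflow
- print the gathered files together with the annotations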
Sample output
TODO
Comments
The comment on IPAW_STAGE = 5 from Query #5 fully applies here too.
Currently neither JPPS nor JPIS supports a query like "all attributes of this job".
If the annotation names are not known a priori, the following approaches are possible:
- A simple workaround is storing all annotations in a similar way as the IPAW_HEADER tag, i.e. in an attribute holding both the annotation name and its value. However, this approach would not allow more complicated queries on the annotation values.
- A better workaround is attaching a known "Annotation" attribute to each job.
This attribute would hold the names of all existing annotations of the job.
The user would first query for the values of this attribute, and then choose which
real annotations (JP attributes) to retrieve.
- Extending the JPPS query interface to support queries like "all attributes of this job falling into
namespace X".
Suggested Workflow Variants
Suggest variants of the workflow that can exhibit capabilities that your system support.
Suggested Queries
Suggest significant queries that your system can support and are not in the proposed list of queries, and how you have implemented/would implement them. These queries may be with regards to a variant of the workflow suggested above.
Categorisation of queries
According to your provenance approach, you may be able to provide a categorisation of queries. Can you elaborate on the categorisation and its rationale.
Live systems
If your system can be accessed live (through portal, web page, web service, or other), provide relevant information here.
Further Comments
Provide here further comments.
Conclusions
Provide here your conclusions on the challenge, and issues that you like to see discussed at a face to face meeting.
References and glossary
Important terms used, their meaning in the scope of Job Provenance, and references to further information.
Term | Meaning | References |
DAG | DAG means Directed Acyclic Graph; in our case it is a description of a set of jobs whose structure (a workflow) is represented as a DAG | Condor project pages |
gLite | A Grid implementation currently developed in the context of the EGEE project | EGEE project, gLite middleware home, EGEE JRA1 home |
Filename | In our case a filename is represented by a URL referencing the file on a GridFTP server. | |
JobId | By JobId we mean "Grid JobId", the logical name of a job at the gLite top level (it is not an id in a local batch system like LSF or PBS). | |
Grid | Large-scale high-performance distributed computing environments that provide access to high-end computational resources. | Grid computing dictionary Grid Scheduling Dictionary of Terms and Keywords |
--
CESNET JRA1 team
--
JiriSitera - 22 Aug 2006