When you execute S2PX Analysis it generates an Excel workbook containing a set of spreadsheets that help you understand how S2PX will handle your existing Server assets at the time of conversion. The sections below describe the information presented on each spreadsheet. Note that the sheets within the generated Excel workbook may be presented in an order different to that described below.
Note: The Microsoft Excel Viewer for Windows renders charts incorrectly. Use the full version of Excel to view the spreadsheet.
S2PX Support
(This summary sheet derives its data from the Support Data sheet which you can inspect for further details.)
This sheet provides a high-level overview of the level of support S2PX will provide for converting your DataStage solution’s particular set of stage types.
Jobs
The percentage of jobs in the supplied ISX file that contain…
SUPPORTED (green): Stage types supported by S2PX conversion. For a Job to be classified in this segment, all of its stage types must be supported by S2PX. In the example shown here we see that approximately 95% of Jobs in this ISX are supported by S2PX conversion.
TBD (yellow): Stage types that may be supported by S2PX conversion in the near future. If you encounter a large volume of Jobs in this category, speak to your IBM representative about the timing for introducing S2PX conversion support for the relevant stage(s). For a Job to be classified in this segment, each of its stage types must be either supported or TBD, with at least one classified as TBD; none of the stage types can be classified as unsupported. Any instance of an unsupported stage type means the entire Job is classified as…
UNSUPPORTED (red): Stage types not supported by S2PX conversion. If a Job contains at least one unsupported stage then it is classified as unsupported (see the sketch below).
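To make the classification rule concrete, here is a minimal Python sketch of the logic described above; the stage-type statuses, data structures, and names are purely illustrative and not part of S2PX:

```python
# Hypothetical per-stage-type support status lookup; the real statuses come
# from the S2PX Support Data sheet.
SUPPORT_STATUS = {
    "Transformer": "SUPPORTED",
    "Sequential File": "SUPPORTED",
    "Pivot": "TBD",
    "UniVerse Data Access": "UNSUPPORTED",
}

def classify_job(stage_types):
    """Classify a Job from the statuses of its stage types:
    UNSUPPORTED if any stage type is unsupported,
    TBD if none are unsupported but at least one is TBD,
    SUPPORTED only when every stage type is supported."""
    statuses = {SUPPORT_STATUS.get(s, "UNSUPPORTED") for s in stage_types}
    if "UNSUPPORTED" in statuses:
        return "UNSUPPORTED"
    if "TBD" in statuses:
        return "TBD"
    return "SUPPORTED"

print(classify_job(["Transformer", "Sequential File"]))       # SUPPORTED
print(classify_job(["Transformer", "Pivot"]))                 # TBD
print(classify_job(["Transformer", "UniVerse Data Access"]))  # UNSUPPORTED
```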
A description of which stages are currently supported and which are still in the planning stage is available here.
Number of unsupported Stages by Status
A colour-coded breakdown of the number of unsupported stage instances encountered, broken down by status. In the example shown here, the most frequently encountered unsupported stage type is the UniVerse Data Access stage, of which 16 instances were encountered.
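The breakdown amounts to counting unsupported stage instances per stage type. A hedged sketch in Python (the input data is invented for illustration; the real figures come from the Support Data sheet):

```python
from collections import Counter

# Hypothetical (job, stage type) pairs for stages flagged as unsupported.
unsupported = [
    ("JobA", "UniVerse Data Access"),
    ("JobB", "UniVerse Data Access"),
    ("JobC", "Inter-process"),
]

# Unsupported stage instances per stage type, most frequent first.
for stage_type, count in Counter(s for _, s in unsupported).most_common():
    print(f"{stage_type}: {count} instance(s)")
```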
Unsupported Jobs - Details

Lists the individual Jobs containing unsupported stages. In the example shown here, Find_Untraced_Jobs_DB2 is the Job which will require the greatest attention, as it contains 6 instances of unsupported stages. The exact names and types of these stages are described in the associated Support Data spreadsheet.
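A hedged sketch of how such a per-Job listing could be derived (the job and stage names below are invented; the real detail lives in the Support Data sheet):

```python
from collections import defaultdict

# Hypothetical (job, stage name, stage type) rows for unsupported stages.
rows = [
    ("Find_Untraced_Jobs_DB2", "uvRead1", "UniVerse Data Access"),
    ("Find_Untraced_Jobs_DB2", "uvRead2", "UniVerse Data Access"),
    ("JobA", "ipc1", "Inter-process"),
]

per_job = defaultdict(list)
for job, stage_name, stage_type in rows:
    per_job[job].append((stage_name, stage_type))

# Jobs needing the most attention first (most unsupported stage instances).
for job, stages in sorted(per_job.items(), key=lambda kv: len(kv[1]), reverse=True):
    print(f"{job}: {len(stages)} unsupported stage(s) -> {stages}")
```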
Jobs Remediation

(This summary sheet derives its data from the Remediation Data sheet which you can inspect for further details.)
Info: The Remediation Data sheet also includes web links to the relevant S2PX documentation pages which provide further details on each of the listed remediation issues.
This sheet details the level of manual intervention that will be required on the Parallel Jobs generated by S2PX.
Jobs Remediation - Summary
The summary graph describes how many Server Jobs will convert to Parallel Jobs which are expected to replicate Server behaviour without problems.
In this example we can see that S2PX expects that…
Not Required (green): Approximately 45% of Jobs will convert to Parallel Jobs which function identically to their Server equivalents.
Required (yellow): The remaining approximately 55%, however, will require some level of manual intervention before they can replicate the behaviour of their Server ancestors (see the sketch below).
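As a simple illustration of how this split could be derived, a sketch assuming a hypothetical per-Job remediation flag (the flag and job names are invented):

```python
# Hypothetical per-Job flag: does the converted Parallel Job need manual work?
needs_remediation = {
    "JobA": False,
    "JobB": True,
    "JobC": True,
    "JobD": False,
    "JobE": True,
}

required = sum(needs_remediation.values())
total = len(needs_remediation)
print(f"Required:     {required / total:.0%}")             # 60%
print(f"Not Required: {(total - required) / total:.0%}")   # 40%
```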
There are a number of reasons why manual intervention may be required, but it is often because a Server Job uses one or more capabilities for which no direct equivalent is available in the DataStage Parallel engine. In some cases the user may need to re-interpret the original Job requirements to identify whether the compromise solution presented by S2PX will suffice, or whether further enhancement of the generated Job(s), using native Parallel functionality, is required.
Jobs Remediation - Details
This section describes the details of the required interventions identified in the summary graph above. The Issue requiring remediation column lists, in order of descending number of instances encountered, the S2PX Asset Query which identified the issue. The documentation for each of these Asset Queries (linked in the associated Remediation Data sheet) suggests a required course of remediation action, where one is required.
In this example we can see that the converted ISX has 183 instances (across 106 distinct Server Jobs) of the Disable Schema Reconciliation issue. In this case a remediation is likely not required but particular attention must be paid to these Jobs during Parallel Job testing.
We can also see 130 instances of a Sequential File with Backup issue across 102 distinct Server Jobs. The actual functionality of the Job itself isn't affected, but for some customers the loss of Sequential File backup functionality may be unacceptable, and so an alternative solution (of which many are available) should be sought.
The Reserved Words in Transformer Stages issue, as a further example, is one where every instance discovered will require human intervention to resolve, as these Jobs will not compile in their current form, and S2PX will not attempt to generate new variable names for your Job because the existing names are likely to encode meaning which is useful for Job developers.
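The table behind these examples is essentially an aggregation of (issue, Job) findings into instance counts and distinct Job counts. A hedged sketch (the findings below are invented; the real ones are in the Remediation Data sheet):

```python
from collections import defaultdict

# Hypothetical (issue, job) findings.
findings = [
    ("Disable Schema Reconciliation", "JobA"),
    ("Disable Schema Reconciliation", "JobA"),
    ("Disable Schema Reconciliation", "JobB"),
    ("Sequential File with Backup", "JobB"),
    ("Reserved Words in Transformer Stages", "JobC"),
]

instances = defaultdict(int)
jobs = defaultdict(set)
for issue, job in findings:
    instances[issue] += 1
    jobs[issue].add(job)

# Issues ordered by descending number of instances, with distinct Job counts.
for issue in sorted(instances, key=instances.get, reverse=True):
    print(f"{issue}: {instances[issue]} instance(s) across {len(jobs[issue])} distinct Job(s)")
```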
Function Calls
...
The Summary graph at the top identifies the proportion of Server Jobs which use function calls which are expected to work without further intervention (because an identically-named Parallel equivalent function already exists) and those which are expected to require some level of human involvement.
In the example here we see…
BUILT-IN (green): Just over 50% of Jobs feature one or more function calls, all of which are natively supported in the target DataStage environment (v11.7.1.3 SP4 or greater).
REQUIRES MAPPING (yellow): Slightly less than 50%, however, call functions which have no Parallel equivalent - either because they are IBM-supplied DataStage Server functions for which no Parallel version exists, or because they are custom functions you have coded yourself in DataStage Basic. In this case you will need to configure S2PX with one or more custom routine mappings (see the sketch below).
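As an illustration only, this split can be thought of as checking each called function name against the set of functions known to exist in the Parallel engine. The set below is hypothetical; the actual mechanism is S2PX's custom routine mapping configuration described in its documentation:

```python
# Hypothetical set of function names with an identically-named Parallel
# equivalent; the real list depends on your target DataStage version.
PARALLEL_BUILTINS = {"Trim", "Len", "UpCase", "DownCase"}

def classify_calls(called_functions):
    """Split a Job's function calls into built-in vs requires-mapping."""
    built_in = sorted(f for f in called_functions if f in PARALLEL_BUILTINS)
    needs_mapping = sorted(f for f in called_functions if f not in PARALLEL_BUILTINS)
    return built_in, needs_mapping

built_in, needs_mapping = classify_calls({"Trim", "MyCustomRoutine"})
print("Built-in:", built_in)               # ['Trim']
print("Requires mapping:", needs_mapping)  # ['MyCustomRoutine']
```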
Function or Routine Name - Details
The table below lists all function calls which S2PX doesn't understand, and for which some level of manual conversion will be required. This table of function names, ordered by descending number of distinct calls, also details the number of distinct Jobs from which the calls originate.
...
...
Connector Migration
...
Jobs requiring Connector Migration
The percentage of jobs involving Connectors (components that provide data connectivity and integration for external data sources, such as relational databases or messaging software) that are classified as legacy components and need migration using IBM's Connector Migration Tool ('CCMT').
Stage Types requiring Connector Migration
A breakdown of the number of stages requiring conversion by stage type, ordered (left-to-right) by descending number of instances discovered in your ISX file.
Jobs requiring Connector Migration
A table of Jobs along with the number of stages on each Job requiring migration, ordered by descending number of stages requiring migration.

Conversion Advisory

...
Info: The Advisory Data sheet also includes web links to the relevant S2PX documentation pages which provide further details on each of the listed issues.
The list is broken down by Stage Type, and ordered arbitrarily.
Prevalence of Hashed Files in Jobs - Summary
The Summary graph at the top simply identifies the proportion of Jobs which make use of one or more Hashed File stages. Jobs are categorised simply as either:
Jobs referencing Hashed Files (yellow), or
Jobs not referencing Hashed Files (green)
Hashed File Usage Patterns - Summary
The next summary graph identifies, for each Hashed File reference, whether that reference is…
RED Shared between Jobs or Job invocations. Identifies Hashed File instances in a Job which are…
Reading but not writing, meaning the Hashed File has been generated by an upstream Job, or
Writing but not reading, meaning the Hashed File is being produced for consumption by a downstream Job, or by a different invocation of the same Job.
ORANGE Hashed File Appending Data. Identifies Hashed File instances which append data, meaning the Hashed File already existed at the point of Job invocation, and is being appended to produce data for consumption by a downstream Job, or by a different invocation of the same Job.
YELLOW Hashed File Synchronisation. Identifies Hashed File instances accessing the same Hashed File from multiple stages on the same canvas, potentially introducing a synchronisation issue that could require manual effort to implement in a Parallel environment.
GREEN Hashed File Transient, Exclusive. Identifies ephemeral Hashed Files, seemingly created, written, and read exclusively within the same Job, which appear to be used in a transient manner, acting simply as a synchronisation point (see the sketch below).
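To make the four categories easier to follow, here is a hedged Python sketch of the classification; the usage flags and the precedence between categories are assumptions made for illustration, not a description of S2PX internals:

```python
from dataclasses import dataclass

@dataclass
class HashedFileUsage:
    """Hypothetical per-Job view of one Hashed File reference."""
    read_here: bool     # some stage in this Job reads the Hashed File
    written_here: bool  # some stage in this Job writes it
    appends: bool       # a writer appends rather than recreating the file
    multi_stage: bool   # several stages on the same canvas access it

def classify(u: HashedFileUsage) -> str:
    # The order of these checks is an assumption made for illustration.
    if u.appends:
        return "ORANGE: appending to a pre-existing Hashed File"
    if u.read_here != u.written_here:
        return "RED: shared between Jobs or Job invocations"
    if u.multi_stage:
        return "YELLOW: same-canvas access needing synchronisation"
    return "GREEN: transient and exclusive to this Job"

# Example: a Hashed File that is only written within this Job.
print(classify(HashedFileUsage(read_here=False, written_here=True,
                               appends=False, multi_stage=False)))  # RED
```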
Hashed File Instances per Job - Details
A list which breaks out the Hashed File Usage Patterns summary above by Job, ordered by descending frequency of usage.

Hashed Files
(This summary sheet derives its data from the Hashed File Data sheet which you can inspect for further details.)