Version: 1.x

Example Reports & Data Architecture

Praman generates 13 dashboard reports that provide a comprehensive view of SAP S/4HANA quality across functional testing, performance, accessibility, integration, and geographic readiness. This guide explains where each report's data comes from and what Praman is responsible for producing.

Important: These reports are examples of what you can design and develop. Refer to the rest of the documentation for details.

The 5 Data Source Categories

Every field in the quality dashboard traces back to one of 5 source categories:

| # | Source | What It Is |
|---|--------|------------|
| 1 | Playwright Test Execution | Direct output from Playwright + Praman running tests against SAP Fiori / UI5 |
| 2 | SAP Backend Telemetry | Data extracted from SAP monitoring transactions (ST03N, STAD, SM37) via OData or RFC |
| 3 | SAP Focused Run / Web Analytics | Real User Monitoring (RUM) data from SAP observability tools |
| 4 | Third-Party System APIs | Health and transaction data from connected external systems |
| 5 | Configuration / Manual Input | Business metadata maintained by the project team (process owners, SLAs, revenue) |

Source Category Deep Dive

Source 1: Playwright Test Execution (Praman)

This is what Praman directly produces. Every Playwright test run generates:

Per-test-step data:

  • Transaction response time (page.evaluate + UI5 rendering + network roundtrip)
  • Pass / fail / blocked status with error message and screenshot
  • Memory consumption (via performance.measureUserAgentSpecificMemory())
  • CPU time (via PerformanceObserver long task monitoring)
  • Network timing (via PerformanceResourceTiming API)
  • DOM snapshot and UI5 control tree state at point of failure

Per-test-suite data:

  • Total scenarios executed, passed, failed, blocked
  • Execution duration (wall clock)
  • Browser/viewport/locale configuration used
  • Playwright trace file (.zip) for debugging

Accessibility scan data (via @axe-core/playwright):

  • WCAG violations per page with rule ID, severity, DOM selector, HTML snippet
  • Principle classification (perceivable / operable / understandable / robust)
  • Screen reader compatibility assertions
  • Keyboard navigation assertions (focus trap detection, tab order)
  • Color contrast ratios (computed from rendered styles)

Integration test data:

  • HTTP response codes from OData/REST calls during test execution
  • CPI iFlow message IDs triggered during test
  • Response latency of each external API call intercepted by Playwright's route handler
  • Data integrity assertions (field values match expected after cross-system flow)

How Praman captures SAP-specific metrics:

```js
// Inside a Playwright test, Praman uses page.evaluate() to call UI5 native APIs
const metrics = await page.evaluate(() => {
  // UI5 performance measurement (module is already loaded in a running Fiori app)
  const perfMon = sap.ui.require('sap/ui/performance/Measurement');
  const measurements = perfMon.getAllMeasurements();

  // OData model state: changes not yet submitted to the backend
  const oDataModel = sap.ui.getCore().getModel();
  const pendingRequests = oDataModel.getPendingChanges();

  // Browser Performance API
  const navTiming = performance.getEntriesByType('navigation')[0];
  const resourceTiming = performance.getEntriesByType('resource');

  return { measurements, pendingRequests, navTiming, resourceTiming };
});
```

Source 2: SAP Backend Telemetry

Extracted from SAP via OData services, RFC calls, or scheduled ABAP reports:

| SAP Transaction | What It Provides | How to Extract |
|-----------------|------------------|----------------|
| ST03N (Workload Monitor) | Per-TCode response time breakdown, peak load periods, user counts | OData or RFC: SWNC_GET_WORKLOAD_SNAPSHOT |
| STAD (Statistical Records) | Dialog step detail: frontend roundtrip, network, memory per step | RFC: SAPWL_STATREC_READ |
| SM37 (Job Monitor) | Batch job execution times, status, start/end timestamps | OData or RFC: BAPI_XBP_JOB_STATUS_GET |
| SM21 (System Log) | Runtime errors, dumps, resource exhaustion events | RFC: BAPI_XMI_LOGOFF pattern |
| SLG1 (Application Log) | Business-level errors (posting failures, workflow errors) | OData: Application Log API |
| SM59 (RFC Destinations) | Connection status to external systems, latency | RFC: RFC_SYSTEM_INFO |
| SE16/CDS Views | Master data record counts, migration verification | OData CDS View exposure |

Source 3: SAP Focused Run / Web Analytics

Real User Monitoring data — tells you what actual users experience vs what tests simulate:

| Tool | Data Provided |
|------|---------------|
| Focused Run RUM | Browser type, OS, screen resolution, client-measured page load, network latency by geography, connection type |
| SAP Web Analytics | Geographic user distribution, locale preferences, Fiori app usage patterns, session duration |

Praman reads this data to set up browser contexts that match actual user conditions:

```js
// Praman configures the Playwright context from Focused Run data
const context = await browser.newContext({
  locale: 'de-DE', // from Web Analytics
  viewport: { width: 1920, height: 1080 }, // from Focused Run RUM
  geolocation: { latitude: 50.1, longitude: 8.7 },
});
```

Source 4: Third-Party System APIs

Health and transaction data from connected external systems:

| System | What to Query | API/Method |
|--------|---------------|------------|
| SAP CPI | iFlow execution logs, message status, error details | CPI OData API: /api/v1/MessageProcessingLogs |
| ERP Integrations | Connection status, OAuth token validity, sync queue depth | REST API health endpoints |
| Banking Partners | File delivery status, payment status codes | SFTP + EBICS/SWIFT status |
| Logistics Carriers | Shipment API response time, label generation success rate | Carrier tracking APIs |

Source 5: Configuration / Manual Input

Business metadata that doesn't come from any system — maintained by the project team:

  • Process names, owners, business impact classification (critical/high/medium/low)
  • Revenue-at-risk estimates per process
  • SLA targets (response time thresholds, success rate targets)
  • Rollout wave assignments (which countries in which wave)
  • Company codes, currency, locale mappings
  • Regulatory requirements (EAA, Section 508, etc.)
  • Interface catalog (which interfaces exist, their criticality, message types)
  • Role-to-process mappings, user counts per role
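
A minimal sketch of how this metadata might be captured in a config file. All keys and values here are illustrative assumptions, not Praman's actual schema:

```json
{
  "processes": [
    {
      "id": "O2C",
      "name": "Order to Cash",
      "owner": "Sales Operations",
      "businessImpact": "critical",
      "revenueAtRisk": 2500000,
      "slaMs": { "VA01": 1500, "VF01": 2000 },
      "waves": { "1": ["DE", "FR"], "2": ["US", "CN"] },
      "roles": ["SD_CLERK", "SD_MANAGER"]
    }
  ]
}
```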

The 13 Dashboard Reports

Click any report below to view its data model and field-by-field source mapping.

| # | Report | Primary Sources |
|---|--------|-----------------|
| 1 | Executive Steering Committee Dashboard | Playwright, Config |
| 2 | Business Process Deep Dive | Playwright, Config |
| 3 | Role-Based Readiness | Playwright, Config, Web Analytics |
| 4 | Data Migration Quality | Playwright, SAP Backend |
| 5 | Risk Register (CFO Report) | Playwright, Config |
| 6 | How It Works | Static |
| 7 | Performance Heatmap | Playwright, SAP Backend, Config |
| 8 | Geographic & Customer Impact | Playwright, Focused Run, Config |
| 9 | Real User Experience Simulation | Playwright, Config |
| 10 | SAP FI — Financial Accounting Quality | Playwright, SAP Backend, Config |
| 11 | WCAG 2.1 AA Accessibility Compliance | Playwright (axe-core), Config |
| 12 | Interface & Integration Quality | 3rd Party, SAP Backend, Playwright |
| 13 | End-to-End Cross-System Quality | Playwright, 3rd Party, Config |

Report 1: Executive Steering Committee Dashboard

Data models: PROCESS_DATA, WEEKLY_TREND

| Field | Source | Detail |
|-------|--------|--------|
| id, name, owner | Config | Business process catalog |
| tcodes | Config | Test scenario definitions |
| totalScenarios | Playwright | Count of test specs in suite |
| passed, failed, blocked | Playwright | Test execution results |
| businessImpact | Config | Business classification |
| revenueAtRisk | Config | Finance team estimate |
| usersAffected | Web Analytics | User count per process |
| failureDetails[].scenario | Playwright | Test spec name that failed |
| failureDetails[].severity | Config | Pre-classified per test scenario |
| WEEKLY_TREND[].readiness | Playwright | Computed: (passed / total) × 100 |
| WEEKLY_TREND[].defectsOpen | Playwright + Config | Open defect count from failures + bug tracker |

Report 2: Business Process Deep Dive

Data models: PROCESS_DATA (same as Report 1, different visualization)

All fields same as Report 1 — this report is a different view of the same data, showing per-process drill-down with TCode-level test results.

Report 3: Role-Based Readiness

Data models: ROLE_READINESS

| Field | Source | Detail |
|-------|--------|--------|
| role | Config | SAP role catalog (PFCG roles mapped to business roles) |
| process | Config | Role-to-process mapping |
| readiness | Playwright | % of test scenarios passing that this role executes |
| users | Web Analytics | Active user count for this role |
| criticalPaths | Config | Number of critical test paths assigned to this role |
| blocked | Playwright | Count of critical paths blocked/failing for this role |

How readiness is computed: Each test scenario is tagged with the SAP roles that execute it. A role's readiness = (passing scenarios for that role) / (total scenarios for that role) x 100.
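
The formula above can be sketched as a small helper. The `scenarios` shape here (role tags plus a status string) is illustrative, not Praman's actual data model:

```javascript
// Compute per-role readiness from role-tagged scenario results.
// Shape assumption: { roles: string[], status: 'passed' | 'failed' | 'blocked' }
function roleReadiness(scenarios, role) {
  const assigned = scenarios.filter((s) => s.roles.includes(role));
  if (assigned.length === 0) return null; // role executes no scenarios
  const passing = assigned.filter((s) => s.status === 'passed').length;
  return Math.round((passing / assigned.length) * 100);
}

const scenarios = [
  { roles: ['AP_CLERK'], status: 'passed' },
  { roles: ['AP_CLERK', 'AP_MANAGER'], status: 'failed' },
  { roles: ['AP_CLERK'], status: 'passed' },
  { roles: ['AP_MANAGER'], status: 'passed' },
];

console.log(roleReadiness(scenarios, 'AP_CLERK')); // 2 of 3 passing → 67
```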

Report 4: Data Migration Quality

Data models: MIGRATION_DATA

| Field | Source | Detail |
|-------|--------|--------|
| entity | Config | Migration object catalog |
| records | SAP Backend | Record count from CDS view |
| validated | Playwright + SAP | Praman runs validation queries via OData CDS views |
| errors | Playwright + SAP | Records failing validation rules |
| critical | Playwright + SAP | Subset classified as blocking |
| status | Playwright | Computed from error rate thresholds |

```js
// Praman validates migration data via OData CDS views
const response = await request.get(
  '/sap/opu/odata/sap/Z_MIGRATION_VALIDATION_CDS/CustomerValidation?$filter=HasErrors eq true',
);
```

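
The derived status field can be computed from the validation counts. A minimal sketch; the status labels and the 1% / 5% thresholds are illustrative assumptions, not Praman defaults:

```javascript
// Derive a migration status from validation counts.
// Thresholds (1% warning, 5% at-risk) are illustrative assumptions.
function migrationStatus({ records, errors, critical }) {
  if (records === 0) return 'pending'; // nothing migrated yet
  if (critical > 0) return 'blocked'; // any blocking record blocks go-live
  const errorRate = errors / records;
  if (errorRate > 0.05) return 'at-risk';
  if (errorRate > 0.01) return 'warning';
  return 'ready';
}

console.log(migrationStatus({ records: 10000, errors: 40, critical: 0 })); // 0.4% → 'ready'
```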
Report 5: Risk Register (CFO Report)

Data models: Derived from PROCESS_DATA, MIGRATION_DATA, PERF_DATA

| Field | Source | Detail |
|-------|--------|--------|
| Risk items | Playwright + Config | Auto-generated from failing tests, blocked scenarios, SLA breaches |
| Revenue at risk | Config | Finance team estimates linked to process failures |
| Mitigation status | Config | Manual tracking of remediation progress |

This report is fully derived — no unique data model. It aggregates failure data from other reports and overlays business context.

Report 6: How It Works

Data models: SAP_DATA_SOURCES

Static informational page describing the testing methodology. No dynamic data.

Report 7: Performance Heatmap

Data models: PERF_DATA (processes, transactions, batchJobs, targets)

| Field | Source | Detail |
|-------|--------|--------|
| **Process-level** | | |
| status | Playwright | Computed from transaction performance vs SLA |
| e2eTime | Playwright | Sum of all transaction avg times |
| avgMemory | Playwright | performance.measureUserAgentSpecificMemory() |
| avgCpu | Playwright | PerformanceObserver long-task monitoring |
| avgNetwork | Playwright | PerformanceResourceTiming network component |
| **Transaction-level** | | |
| avgTime | Playwright | Mean response time across all test executions |
| p95Time | Playwright | 95th percentile response time |
| sla | Config | SLA target per transaction |
| eccBaseline | SAP Backend (ST03N) | Historical ECC response time |
| trend | Playwright | Compare current avg to previous runs |
| samples | Playwright | Total test executions for this TCode |
| **Batch jobs** | | |
| s4Time | SAP Backend (SM37) | Job runtime from SM37 job log |
| eccTime | SAP Backend (ST03N) | Historical ECC batch job runtime |
| memory, cpu | SAP Backend (ST06) | Server-side resource consumption |

Info: Transaction response times come from Playwright (client-side), while batch job times come from SAP SM37 (server-side). Praman measures what the user experiences; SM37 measures what the server does.
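
The p95Time field can be derived from the collected per-run samples. A minimal sketch; the nearest-rank percentile method is an assumption, not necessarily what Praman uses:

```javascript
// Nearest-rank percentile over response-time samples (ms).
function percentile(samples, p) {
  if (samples.length === 0) return NaN;
  const sorted = [...samples].sort((a, b) => a - b);
  const rank = Math.ceil((p / 100) * sorted.length);
  return sorted[Math.max(rank - 1, 0)];
}

const times = [820, 940, 1010, 880, 2600, 910, 990, 870, 930, 1020];
console.log(percentile(times, 95)); // one slow outlier dominates the p95 → 2600
```

This is why the report tracks p95 alongside avgTime: a single slow run barely moves the mean but is fully visible in the tail.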

Report 8: Geographic & Customer Impact

Data models: GLOBAL_REGIONS, SAP_DATA_SOURCES

| Field | Source | Detail |
|-------|--------|--------|
| **Country metadata** | | |
| users | Web Analytics | Active user count per country |
| companyCode, locale, currency | Config | SAP org structure |
| wave | Config | Rollout wave assignment |
| peakHour | SAP Backend (ST03N) | Peak usage hour from workload monitor |
| **Network profile** | | |
| network.latency | Focused Run RUM | Client-to-server latency per geography |
| network.bandwidth | Focused Run RUM | Measured throughput |
| **Browser/Device profile** | | |
| browser.Chrome | Focused Run RUM | Browser market share per country |
| device.desktop | Focused Run RUM | Device type distribution |
| **Simulation results** | | |
| simulation.transactions[].optimal | SAP Backend (ST03N) | Baseline from low-latency location |
| simulation.transactions[].simulated | Playwright | Measured time with country network/locale profile |
| simulation.overallScore | Playwright | Weighted score from all simulated transactions |
| simulation.avgDegradation | Playwright | ((simulated − optimal) / optimal) × 100 |

```js
// For each country, Praman creates a context matching real conditions
const deContext = await browser.newContext({
  locale: 'de-DE',
  timezoneId: 'Europe/Berlin',
  viewport: { width: 1920, height: 1080 },
  geolocation: { latitude: 50.1, longitude: 8.7 },
});
// Run same test scenarios — difference = geographic degradation
```

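
The avgDegradation formula from the table can be applied per transaction and then averaged. The `tcode`/`optimal`/`simulated` field names follow the table above; the sample numbers are illustrative:

```javascript
// degradation % = ((simulated - optimal) / optimal) * 100
function degradation(optimal, simulated) {
  return ((simulated - optimal) / optimal) * 100;
}

// Average the per-transaction degradation across all simulated transactions
function avgDegradation(transactions) {
  const total = transactions.reduce(
    (sum, t) => sum + degradation(t.optimal, t.simulated),
    0,
  );
  return total / transactions.length;
}

const txns = [
  { tcode: 'VA01', optimal: 800, simulated: 1200 }, // +50%
  { tcode: 'MIRO', optimal: 1000, simulated: 1100 }, // +10%
];
console.log(avgDegradation(txns)); // (50 + 10) / 2 → 30
```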
Report 9: Real User Experience Simulation

Data models: GLOBAL_REGIONS, LOCALE_VALIDATION, BROWSER_COMPAT

| Field | Source | Detail |
|-------|--------|--------|
| LOCALE_VALIDATION[].dateFormat | Playwright | Validated by entering dates and checking rendering |
| LOCALE_VALIDATION[].numberFormat | Playwright | Validated by entering amounts and checking formatting |
| BROWSER_COMPAT[].browser | Config | Browser matrix for testing |
| BROWSER_COMPAT[].status | Playwright | Full suite execution on this browser |
| BROWSER_COMPAT[].issues | Playwright | Count of browser-specific failures |

Same test suite run across Chrome, Firefox, WebKit (Safari), Edge with different locale settings and responsive viewport testing.

Report 10: SAP FI — Financial Accounting Quality

Data models: FI_QUALITY_DATA

| Field | Source | Detail |
|-------|--------|--------|
| **Document posting tests** | | |
| postingScenarios | Config | FI posting test catalog (FB01, F-02, VF01, MIRO) |
| passed, failed, blocked | Playwright | Test execution results per posting type |
| responseTime | Playwright | Time to complete posting including SAP roundtrip |
| **Balance verification** | | |
| trialBalance.match | Playwright + SAP | Automated comparison after posting |
| intercompanyBalance | Playwright + SAP | IC reconciliation verification |
| **Period-end close** | | |
| closingTasks[].status | Playwright | Automated tests for period-end transactions (F.01, S_ALR_87012284) |
| closingTasks[].runtime | SAP Backend (SM37) | Batch job execution time |
| **Tax & compliance** | | |
| taxCalculation.accuracy | Playwright | Tax determination results vs expected |
| withholdingTax.status | Playwright | WHT posting and reporting tests |
| regulatoryReports | Config | Country-specific reporting requirements |

Report 11: WCAG 2.1 AA Accessibility Compliance

Data models: ACCESSIBILITY_DATA

| Field | Source | Detail |
|-------|--------|--------|
| apps[].score | Playwright (axe-core) | Weighted score from violation count and severity |
| apps[].violations | Playwright (axe-core) | Total axe-core violations detected |
| apps[].critical, serious, moderate | Playwright (axe-core) | Violations by severity level |
| apps[].screenReader | Playwright | ARIA roles, live regions, focus management |
| apps[].keyboard | Playwright | Tab reachability, focus traps, Esc behavior |
| violationsByType[].rule | Playwright (axe-core) | axe rule ID (e.g. color-contrast) |
| violationsByType[].principle | Playwright (axe-core) | WCAG principle mapping |
| regulatoryRequirements[] | Config | Legal/compliance team input (EAA, Section 508) |

```js
import AxeBuilder from '@axe-core/playwright';

test('Fiori app accessibility', async ({ page }) => {
  await page.goto('/sap/bc/ui5_ui5/.../FioriLaunchpad.html#PurchaseOrder-manage');
  await page.waitForSelector('#sap-ui-content');

  const results = await new AxeBuilder({ page })
    .withTags(['wcag2a', 'wcag2aa', 'wcag21aa'])
    .analyze();

  // Produces: violations[], passes[], incomplete[]
  // Each violation: id, impact, description, nodes[]{html, target, failureSummary}
});
```

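
One way to turn those axe-core results into the weighted apps[].score is to penalize each violation by its impact. The weights below are illustrative assumptions, not Praman's actual scoring:

```javascript
// Weighted accessibility score: start at 100, subtract per violating node by impact.
// Weight values are illustrative assumptions.
const WEIGHTS = { critical: 10, serious: 5, moderate: 2, minor: 1 };

function accessibilityScore(violations) {
  const penalty = violations.reduce(
    (sum, v) => sum + (WEIGHTS[v.impact] ?? 1) * v.nodes.length,
    0,
  );
  return Math.max(0, 100 - penalty); // floor at 0
}

// Shape mirrors axe-core's results.violations entries
const violations = [
  { id: 'color-contrast', impact: 'serious', nodes: [{}, {}] }, // 2 × 5 = 10
  { id: 'label', impact: 'critical', nodes: [{}] }, // 1 × 10 = 10
];
console.log(accessibilityScore(violations)); // 100 - 20 → 80
```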
Report 12: Interface & Integration Quality

Data models: INTERFACE_DATA

| Field | Source | Detail |
|-------|--------|--------|
| **Interface catalog** | | |
| id, name, type | Config | Integration team documentation |
| source, target, direction | Config | Integration architecture |
| frequency, criticality | Config | Business classification |
| dailyVolume | 3rd Party + SAP | CPI message log count |
| avgLatency | 3rd Party + Playwright | API timing or CPI processing duration |
| errorRate | 3rd Party + SAP | Error messages / total messages |
| status | Playwright + 3rd Party | Computed from errorRate + latency vs SLA |
| **Third-party system health** | | |
| thirdPartySystems[].status | 3rd Party + Playwright | Aggregated from interface statuses |
| thirdPartySystems[].uptime | 3rd Party + SAP | CPI message success rate over rolling window |
| **Stability history** | | |
| stabilityHistory[].totalUp, totalDown | Playwright + 3rd Party | Weekly pass/fail aggregation |
| interfaceStability[].weeksDown | Playwright | Consecutive weeks failing |
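
The computed status field combines error rate and latency against the SLA. A minimal sketch; the status labels and thresholds here are illustrative assumptions:

```javascript
// Interface status from error rate and latency vs SLA.
// Thresholds (1% / 5% error rate, 1x / 2x SLA latency) are illustrative assumptions.
function interfaceStatus({ errorRate, avgLatency, slaLatency }) {
  if (errorRate > 0.05 || avgLatency > slaLatency * 2) return 'down';
  if (errorRate > 0.01 || avgLatency > slaLatency) return 'degraded';
  return 'healthy';
}

console.log(interfaceStatus({ errorRate: 0.002, avgLatency: 450, slaLatency: 500 })); // 'healthy'
console.log(interfaceStatus({ errorRate: 0.03, avgLatency: 450, slaLatency: 500 })); // 'degraded'
```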
Report 13: End-to-End Cross-System Quality

Data models: E2E_SCENARIOS

| Field | Source | Detail |
|-------|--------|--------|
| id, name, process | Config | Scenario catalog |
| status | Playwright | Overall: fail if any step fails |
| criticality | Config | Business classification |
| sla | Config | End-to-end SLA target |
| revenueAtRisk | Config | Finance team estimate |
| steps[].system | Config | System in the chain |
| steps[].status | Playwright + 3rd Party | Result of testing this specific step |
| steps[].time | Playwright | Measured execution time |
| steps[].interface | Config | Interface ID (links to Report 12) |
| e2eTime | Playwright | Sum of all step times |
| dataIntegrity | Playwright | Values match end-to-end |
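
The roll-up rules in this table (fail if any step fails; e2eTime is the sum of step times) can be sketched as follows. The `slaBreached` flag is an assumed convention for comparing e2eTime against the SLA target:

```javascript
// Roll up an E2E scenario from its per-step results.
function rollUpScenario(steps, slaMs) {
  const e2eTime = steps.reduce((sum, step) => sum + step.time, 0);
  return {
    status: steps.some((s) => s.status === 'fail') ? 'fail' : 'pass',
    e2eTime,
    slaBreached: e2eTime > slaMs, // assumed convention, not a documented field
  };
}

const steps = [
  { system: 'S/4HANA', status: 'pass', time: 1200 },
  { system: 'CPI', status: 'pass', time: 300 },
  { system: 'Bank', status: 'pass', time: 900 },
];
console.log(rollUpScenario(steps, 5000)); // { status: 'pass', e2eTime: 2400, slaBreached: false }
```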

Data Flow Architecture

```text
+------------------------------------------------------------------+
|                        DATA SOURCE LAYER                          |
+--------------+--------------+-------------+---------+------------+
|  Playwright  |  SAP Backend |   Focused   |   3rd   |   Config   |
|  + Praman    |  ST03N/STAD  |  Run / Web  |  Party  |  YAML/JSON |
|  + axe-core  |  SM37/SM59   |  Analytics  |  APIs   |  files     |
+--------------+--------------+-------------+---------+------------+
                                 |
                    PRAMAN AGGREGATION ENGINE
      Collects > Normalizes > Computes Scores > Tracks History
                                 |
                    OUTPUT: dashboard-data.json
         Fed into React dashboard (Docusaurus / standalone)
```

What Percentage Comes From Where?

Estimated across all ~180 unique data fields in the dashboard:

| Source | Field Count | % | What |
|--------|-------------|---|------|
| Playwright + Praman | ~75 fields | 42% | Test results, response times, pass/fail, accessibility scans, memory/CPU/network, E2E chain status |
| Configuration | ~55 fields | 30% | Process catalog, SLA targets, business impact, interface catalog, role mappings, remediation guidance |
| SAP Backend | ~25 fields | 14% | ECC baselines (ST03N), batch job times (SM37), record counts (CDS), peak hours |
| Focused Run / Web Analytics | ~15 fields | 8% | Geographic distribution, network latency, browser/device stats |
| Third-Party APIs | ~10 fields | 6% | External system health, CPI message logs, file delivery status |

Implementation Priority

Phase 1 — Core (Reports 1-3, 5-7)

  1. Playwright test execution results — pass/fail/blocked, response times, error details
  2. Config files — process catalog, SLA targets, test scenario definitions
  3. Basic SAP telemetry — ST03N baselines for ECC comparison

Phase 2 — Depth (Reports 4, 8-9)

  1. Migration validation queries — CDS views for record counts and error detection
  2. Focused Run / Web Analytics — geographic profiles for realistic simulation
  3. Locale/browser matrix testing — multi-context Playwright execution

Phase 3 — Enterprise (Reports 10-13)

  1. axe-core integration — accessibility scanning per Fiori app
  2. CPI monitoring API — interface health, message volumes, error rates
  3. Third-party API health checks — external system connectivity validation
  4. E2E chain orchestration — multi-system flow testing with Playwright as orchestrator

What Praman Needs to Ship

| Praman Feature | Reports It Feeds |
|----------------|------------------|
| Test execution reporter (pass/fail/time/error) | 1, 2, 3, 5, 7 |
| page.evaluate() SAP UI5 performance extraction | 7 |
| Browser Performance API collection (memory/CPU/network) | 7, 8 |
| Multi-context execution (locale/browser/viewport/network) | 8, 9 |
| @axe-core/playwright integration | 11 |
| HTTP request interception + timing (route handler) | 12, 13 |
| CPI OData log fetcher | 12, 13 |
| Config file parser (YAML/JSON process catalog) | All |
| Historical result aggregator (weekly trends, stability) | 1, 7, 12 |
| Dashboard data JSON generator | All |