01.01.2026

Nerve Trace White Paper: Automated Quantification and Reporting of Intraepidermal Nerve Fiber Density (IENFD) from Digitized Skin Biopsies

Gabriel Viggers

Co-founder & CEO

Introduction

Small‑fiber neuropathy (SFN) is under‑recognized due to the limited sensitivity of nerve conduction studies and the laborious nature of epidermal nerve fiber counting. Skin biopsy with protein gene product 9.5 (PGP9.5) immunostaining and linear IENFD quantification at the distal leg is a guideline‑endorsed diagnostic method, but manual quantification is time‑consuming and variable. Nerve Trace is a secure, workflow‑integrated software platform that automates IENFD/ENFD detection from whole‑slide images (WSIs), enables expert review and editing, and produces compliant, versioned clinical reports with full analytical provenance. This paper presents the clinical rationale, algorithmic design, validation framework, informatics architecture, security and compliance controls, and limitations of Nerve Trace, situating the product against current standards and best practices for SFN diagnostics and digital pathology.

1. Background and Clinical Rationale

1.1 Small‑Fiber Neuropathy and IENFD

SFN affects unmyelinated C‑fibers and thinly myelinated Aδ fibers, producing pain, dysesthesias, and autonomic symptoms. Because standard electrophysiology primarily assesses large myelinated fibers, histological assessment of cutaneous small fibers by IENFD has become a key tool. Consensus guidelines endorse distal leg skin punch biopsy (≈10 cm above the lateral malleolus) with PGP9.5 immunohistochemistry and counting rules that include only fibers crossing the dermal–epidermal junction; secondary branching above the junction is not counted as additional fibers. Laboratories compare the measured IENFD (fibers/mm) against age‑ and site‑specific reference ranges to classify abnormality.

1.2 Operational Pain Points

Despite clinical value, IENFD is limited by: (i) manual counting burden (15–20 min/slide), (ii) inter‑/intra‑observer variability, (iii) inconsistent documentation of counting rules and reference ranges, and (iv) fragmented informatics (slides, annotations, metrics, and reports scattered across systems). Nerve Trace addresses these with algorithmic pre‑quantification, expert‑in‑the‑loop editing, standardized metrics/versioning, and compliant, e‑signable reports.

2. Nerve Trace Product Overview

Nerve Trace is a browser‑based, lab‑deployable platform comprising: (1) high‑performance WSI viewing and overlay rendering; (2) model‑assisted detection of epidermal boundary and epidermal‑crossing fibers; (3) interactive editing tools (add/remove fiber, adjust boundary, ROI exclusion); (4) real‑time metric recomputation (IENFD, fiber count, linear mm, confidence, QC flags); (5) report building with templates, snapshots, and e‑signature; and (6) end‑to‑end auditability. Recent enhancements add run‑scoped reporting (a self‑contained report per analysis run with pinned method/version) and a Patient File Cabinet to longitudinally organize run reports, signed PDFs, and snapshots by case and run version.

2.1 Key Value Propositions
  • Analytical assistance: AI‑assisted detection reduces manual effort while preserving clinician authority via editable overlays and diffs.

  • Reproducibility: Every metric is traceable to a specific analysis job, model/method version, pixel calibration, and user edit history.

  • Throughput: Median review time target < 4 min/slide with viewer performance ≥ 30 FPS and p50 inference < 5 min/slide.

  • Compliance: HIPAA‑aligned controls, 21 CFR Part 11‑inspired e‑signatures, immutable audit logs, and organization‑scoped access via row‑level security (RLS).

3. Methods

3.1 Specimen, Staining, and Imaging Assumptions

Nerve Trace assumes 3‑mm punch biopsies processed with PGP9.5 (± type IV collagen for basement membrane delineation), captured as SVS/NDPI/OME‑TIFF pyramidal WSIs with reliable pixel‑size metadata. Laboratory site and pixel calibration are required for density normalization; a de‑identified mode is also supported.

3.2 Computational Pipeline

Boundary segmentation. A convolutional (or U‑Net–family) model segments the dermal–epidermal junction; post‑processing enforces topological plausibility and smoothness.
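
For illustration, a minimal post‑processing sketch: extract a boundary polyline from a binary epidermis mask and smooth it. This is not the shipped model pipeline; the mask input, the contour choice, and the smoothing window are assumptions.

```python
# Illustrative boundary post-processing: contour extraction plus
# moving-average smoothing. Selecting the junction-facing side of the
# contour and the topological plausibility checks are elided here.
import numpy as np
from skimage import measure

def extract_boundary(mask: np.ndarray, window: int = 15) -> np.ndarray:
    """mask: binary epidermis segmentation; returns smoothed (row, col) vertices."""
    contours = measure.find_contours(mask.astype(float), 0.5)
    if not contours:
        raise ValueError("no epidermal boundary found in mask")
    boundary = max(contours, key=len)           # keep the dominant contour
    kernel = np.ones(window) / window           # simple moving-average filter
    return np.column_stack([
        np.convolve(boundary[:, 0], kernel, mode="same"),
        np.convolve(boundary[:, 1], kernel, mode="same"),
    ])
```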

Fiber detection. Instance segmentation or keypoint‑plus‑tracking produces vectorized fiber polylines in the epidermis and papillary dermis.

Intersection logic and counting rules. Each detected fiber is counted once if its polyline crosses the basement membrane; supra‑epidermal branching is not counted as additional fibers; tangential contacts without true traversal are excluded; and duplicate detections are merged by spatial clustering, as sketched below.
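
A minimal sketch of these rules, assuming fibers arrive as vectorized polylines and using shapely for the geometry; the merge threshold, calibration value, and function names are illustrative.

```python
# Sketch of the counting rule: a fiber counts once if its polyline truly
# crosses the boundary (shapely's crosses() excludes tangential touches),
# and near-duplicate crossings are merged by a greedy distance threshold,
# a simplified stand-in for the spatial clustering described above.
from shapely.geometry import LineString, Point

def unique_crossings(boundary_pts, fiber_polylines,
                     merge_um: float = 3.0, um_per_px: float = 0.25):
    boundary = LineString(boundary_pts)
    merge_px = merge_um / um_per_px
    kept: list[Point] = []
    for pts in fiber_polylines:
        fiber = LineString(pts)
        if not fiber.crosses(boundary):          # touch-only contacts excluded
            continue
        hit = fiber.intersection(boundary)
        point = hit if hit.geom_type == "Point" else list(hit.geoms)[0]
        if all(point.distance(p) > merge_px for p in kept):
            kept.append(point)                   # each fiber contributes once
    return len(kept), kept
```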

Metric computation. For ROIs retained after artifact exclusion, the number of unique crossings N is divided by the linear basement membrane length L (mm) to yield IENFD = N/L (fibers/mm). Secondary metrics include total fiber count, linear mm, per‑ROI variability, and a confidence score derived from detection posteriors.
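
With crossings counted, the density itself is a short calculation; a minimal sketch under the definitions above, with pixel calibration as an input.

```python
# IENFD = N / L: convert boundary length from pixels to millimeters using
# the slide's pixel calibration, then divide crossings by length.
from dataclasses import dataclass

@dataclass
class SlideMetrics:
    fiber_count: int      # N: unique epidermal crossings
    boundary_mm: float    # L: basement-membrane length in millimeters
    ienfd: float          # N / L, fibers per millimeter

def compute_metrics(n_crossings: int, boundary_len_px: float,
                    um_per_px: float) -> SlideMetrics:
    boundary_mm = boundary_len_px * um_per_px / 1000.0   # px -> um -> mm
    ienfd = n_crossings / boundary_mm if boundary_mm > 0 else float("nan")
    return SlideMetrics(n_crossings, boundary_mm, ienfd)
```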

Quality control. Automatic flags include low contrast, boundary ambiguity, tissue fold/tear, staining dropout, and high density variance across ROIs. Flags are surfaced in the viewer and report.

3.3 Expert‑in‑the‑Loop Editing and Diffs

Editors can add/remove fibers, adjust the boundary, draw exclusion ROIs, and annotate comments. Edits are versioned; diffs quantify added/removed fibers, Δboundary length, and net ΔIENFD. Metrics update in near real time (≤ 500 ms debounce) to promote rapid what‑if review.
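
A simplified sketch of the diff computation, assuming stable fiber identifiers across edit versions; the field names are illustrative, not the stored diff schema.

```python
# Edit diff: which fibers were added or removed, and the net IENFD change.
def edit_diff(before_ids: set, after_ids: set,
              ienfd_before: float, ienfd_after: float) -> dict:
    return {
        "added": sorted(after_ids - before_ids),
        "removed": sorted(before_ids - after_ids),
        "delta_ienfd": round(ienfd_after - ienfd_before, 2),  # fibers/mm
    }
```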

3.4 Reporting and Sign‑Off

Report drafts auto‑populate patient/specimen, methods (including model/method versions and pixel calibration), findings (per‑slide and aggregate metrics with reference‑range flags), snapshots with scale bars, and an editable interpretation. Signing requires re‑authentication and reason capture; a signature page with signer credentials, UTC timestamp, and SHA‑256 checksum is appended. Locked PDFs are immutable; any revision generates a new version.

3.5 Run‑Scoped Reporting and Longitudinal Organization

Each completed analysis run spawns a Run Report (vN) pinned to that run’s inputs, outputs, and method version. The Patient File Cabinet lists drafts and signed reports across runs, enabling cross‑run comparisons and preventing overwriting of prior, signed findings.

4. Validation Framework

4.1 Claims and Clinical Acceptance Criteria

Primary analytical claims for MVP‑1 are: (i) non‑inferiority of IENFD versus expert quantification after expert review (post‑edit delta target < 15% on average; stretch < 5%), and (ii) time‑efficiency (median review < 4 min/slide). Secondary claims cover precision/repeatability, viewer performance, and job success rates.

4.2 Study Designs and Statistics

Method comparison. Compare Nerve Trace‑assisted IENFD to reference counts by board‑certified pathologists under consensus rules. Use Deming or Passing‑Bablok regression for bias estimation; report slope/intercept with 95% CIs; produce Bland–Altman plots (bias, LOA). Pre‑specify clinically acceptable bias (e.g., ≤ 1.0 fibers/mm or ≤ 15%).
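
For concreteness, a numpy sketch of the two primary analyses: Deming regression with a configurable error‑variance ratio (1.0 gives orthogonal regression) and Bland–Altman bias with 95% limits of agreement. Confidence intervals are omitted for brevity.

```python
# Method-comparison sketch: Deming slope/intercept and Bland-Altman LOA.
import numpy as np

def deming(x: np.ndarray, y: np.ndarray, lam: float = 1.0):
    sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
    sxy = np.cov(x, y, ddof=1)[0, 1]
    slope = ((syy - lam * sxx)
             + np.sqrt((syy - lam * sxx) ** 2 + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, y.mean() - slope * x.mean()    # (slope, intercept)

def bland_altman(x: np.ndarray, y: np.ndarray):
    d = y - x
    bias, sd = d.mean(), d.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)  # bias, (LOA low, high)
```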

Precision. Evaluate repeatability (same slide/run/editor) and reproducibility (multi‑day, multi‑editor, multi‑instrument) via nested ANOVA; report CV% and SD components.
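
A simplified, balanced one‑way variance‑component sketch in the spirit of CLSI EP05; the full nested day/editor/instrument design is elided, and the array shape is an assumption.

```python
# Repeatability vs between-run variability from replicate IENFD measurements.
import numpy as np

def variance_components(runs: np.ndarray):
    """runs: shape (n_runs, n_replicates) of IENFD values for one slide."""
    k, n = runs.shape
    ms_within = runs.var(axis=1, ddof=1).mean()        # within-run mean square
    ms_between = n * runs.mean(axis=1).var(ddof=1)     # between-run mean square
    var_between = max((ms_between - ms_within) / n, 0.0)
    grand = runs.mean()
    cv_repeat = 100 * np.sqrt(ms_within) / grand       # repeatability CV%
    cv_total = 100 * np.sqrt(ms_within + var_between) / grand
    return cv_repeat, cv_total
```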

Agreement metrics. Compute two‑way random‑effects ICC(2,1) for absolute agreement across raters/methods.

Robustness/QC stress. Stratify by staining batch, scanner model, pixel size perturbation, and tissue artifacts; evaluate flag sensitivity/specificity.

Sample size. Power analyses for regression slope ≈ 1 and Bland–Altman LOA precision using historical variance; inflate for multi‑site heterogeneity.

4.3 Reference Ranges and Clinical Classification

Laboratories configure site‑ and age‑stratified reference ranges (e.g., distal leg deciles). Reports show measured IENFD alongside the lab’s reference interval and flag (below/within/above). Where labs adopt published distal‑leg datasets, the report footer cites the source and method (brightfield PGP9.5 vs confocal) to avoid cross‑modality drift.
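
Classification itself reduces to a comparison against the lab‑configured interval; in the sketch below the bounds are inputs supplied by the laboratory, not normative values.

```python
# Flag a measured IENFD against a lab-configured, age/site-specific interval.
def classify_ienfd(value: float, lower: float, upper: float) -> str:
    if value < lower:
        return "below"
    return "above" if value > upper else "within"
```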

4.4 Reader Study and Human‑Factors

A usability study measures task load (NASA‑TLX), error rates, and time‑to‑approve with/without overlays and with common edit tasks. Design changes are iterated until predefined usability thresholds are met. Human‑AI teaming guidance (transparency, confidence display, and override affordances) informs UI choices.

5. System Architecture and Data Provenance

5.1 Components
  • Frontend: Next.js application with high‑throughput tile rendering (OpenSeadragon/WebGL), overlay compositing, edit history, and keyboard shortcuts.

  • Backend: API routes for CRUD, upload orchestration (pre‑signed URLs), report rendering, and signature workflows.

  • Inference service: FastAPI GPU workers running the segmentation/detection pipeline with job orchestration and webhook/polling status (a minimal sketch follows this list).

  • Database/Storage: PostgreSQL (Supabase) with RLS; Storage buckets for slides, tiles, overlays, and reports.
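
A minimal FastAPI sketch of the job submission/status pattern used by the inference service; the endpoint shapes and in‑memory store are illustrative, and the production service would use persistent queues plus webhooks.

```python
# Submit an analysis job and poll its status; an in-memory dict stands in
# for the persistent job table described in the data model.
import uuid
from fastapi import FastAPI, HTTPException

app = FastAPI()
jobs: dict = {}

@app.post("/jobs")
def submit_job(slide_id: str) -> dict:
    job_id = str(uuid.uuid4())
    jobs[job_id] = {"slide_id": slide_id, "status": "queued"}
    # a GPU worker would pick this up, run inference, and write status/metrics
    return {"job_id": job_id}

@app.get("/jobs/{job_id}")
def job_status(job_id: str) -> dict:
    if job_id not in jobs:
        raise HTTPException(status_code=404, detail="unknown job")
    return jobs[job_id]
```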

5.2 Data Model and Traceability

All artifacts are linked: organization → patient → case → slide → analysis_job → overlays/metrics → report. Metrics store method_version, units, scope (slide vs case), QC flags, and (for run‑scoped metrics) the analysis_job_id. Reports store run_version, signer identity, timestamp, checksum, and paths to draft HTML/PDF.
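
Rendered as illustrative dataclasses (field names mirror the narrative above, not the actual schema):

```python
# Provenance chain: every metric pins a run; every report pins a run version.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Metric:
    analysis_job_id: str          # run-scoped provenance
    method_version: str
    name: str                     # e.g. "ienfd"
    value: float
    units: str                    # "fibers/mm"
    scope: str                    # "slide" or "case"
    qc_flags: list = field(default_factory=list)

@dataclass
class Report:
    case_id: str
    run_version: int
    signer_id: Optional[str] = None
    signed_at_utc: Optional[str] = None
    sha256: Optional[str] = None  # checksum of the locked PDF
```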

5.3 Performance Targets

Chunked, resumable uploads support 5 GB WSIs; first paint to thumbnail/tiles ≤ 5 s; tile fetch p95 ≤ 200 ms; viewer ≥ 30 FPS on standard lab desktops; inference p50 < 5 min per slide (p95 < 10 min) with horizontal GPU scaling.
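
A sketch of the chunked, resumable upload path, assuming the backend issues one pre‑signed URL per part; the part size and resume convention are assumptions.

```python
# Upload a WSI in fixed-size parts; resume by passing the first
# unacknowledged part index after an interruption.
import requests

CHUNK = 64 * 1024 * 1024  # 64 MiB parts (illustrative)

def upload_resumable(path: str, presigned_urls: list, start_part: int = 0):
    with open(path, "rb") as f:
        f.seek(start_part * CHUNK)
        for url in presigned_urls[start_part:]:
            data = f.read(CHUNK)
            if not data:
                break
            requests.put(url, data=data, timeout=120).raise_for_status()
```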

6. Security, Privacy, and Compliance

6.1 HIPAA‑Aligned Safeguards

Administrative, physical, and technical safeguards include: least‑privilege RBAC; org‑scoped RLS; encryption in transit (TLS 1.3) and at rest; signed URLs with short TTL; audit trails for all PHI access and edits; configurable retention and archival; IP allow‑listing; optional de‑identification mode and watermarking for previews. BAAs are executed with cloud vendors prior to PHI processing. NIST SP 800‑66 mapping guides risk analysis and control selection.

6.2 Electronic Signatures and Records Integrity

The e‑signature workflow captures signer identity (re‑authentication), intent (reason), and timestamps. The system appends a signature page and a SHA‑256 document hash to the PDF and locks the record; any subsequent change requires a new version. Integrity can be verified by recomputing the hash from the downloaded PDF.
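
Verification is a plain hash comparison; a stdlib sketch, assuming the checksum recorded at signing time is available for comparison.

```python
# Recompute SHA-256 of the downloaded PDF and compare to the recorded value.
import hashlib

def verify_report(pdf_path: str, expected_sha256: str) -> bool:
    h = hashlib.sha256()
    with open(pdf_path, "rb") as f:
        for block in iter(lambda: f.read(1 << 20), b""):   # 1 MiB blocks
            h.update(block)
    return h.hexdigest() == expected_sha256.lower()
```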

6.3 Digital Pathology Validation Practices

Local clinical use requires site‑specific validation of WSI systems for intended use. Nerve Trace supports validation through deterministic rendering of overlays across viewers, explicit pixel calibration capture, and exportable audit trails and datasets for repeatability studies.

6.4 Regulatory Positioning (Informational)

Nerve Trace is positioned as decision‑support and measurement software operated under laboratory oversight. If marketed as a medical device, applicable AI/ML SaMD expectations (good machine learning practice, transparency of human‑AI teaming, and change‑control plans) would guide submissions.

7. Limitations and Risk Mitigations

  • Stain/Scanner variability. Cross‑site variability can shift detection. Mitigations: calibration sets, QC flags, method pinning by run, and site‑specific acceptance testing.

  • Edge cases. Ulceration, severe atrophy, or autolysis can confound boundary detection; ROIs and editor tools enable exclusion.

  • Reference range heterogeneity. Published intervals differ by method and cohort; the platform surfaces the source and enables local configuration.

  • User edits. Over‑correction can introduce bias; diffs and audit logs promote review, and training materials standardize counting.

8. Results Snapshot (from MVP Targets)

  • Post‑edit delta target < 15% average between AI baseline and final expert counts on pilot data.

  • Median review time < 4 min/slide with overlay assistance.

  • Viewer performance ≥ 30 FPS on commodity lab desktops.

  • Run‑scoped draft reports appear ≤ 5 s after job completion; signed reports are immutable and versioned.

9. Discussion

By formalizing counting rules in software, pinning every metric to run‑scoped method versions, and capturing full, immutable provenance, Nerve Trace operationalizes the rigor expected of quantitative pathology. The expert‑in‑the‑loop model preserves diagnostic authority while materially reducing time and variability. Beyond SFN, the same architecture (vector overlays, linear‑density metrics, human‑AI diffs, run‑scoped reports) generalizes to other semiquantitative tasks in dermatopathology and peripheral nerve disease research.

10. Future Work

  1. Multi‑site clinical validation with stratified analysis by stain/scanner and disease subgroup.

  2. Active‑learning loops for targeted model updates under predetermined change‑control.

  3. HL7/LIS exports and case‑level longitudinal analytics.

  4. Richer uncertainty quantification and confidence visualization.

  5. Certificate‑based digital signatures for cryptographic non‑repudiation.

Acknowledgments

We thank collaborating pathologists and laboratory technologists for methodological feedback and pilot datasets.

References
  1. Lauria G, Hsieh S‑T, Johansson O, et al. EFNS/PNS guideline on the use of skin biopsy in the diagnosis of small fiber neuropathy. J Peripher Nerv Syst. 2010.

  2. Lauria G, Bakkers M, Schmitz C, et al. Intraepidermal nerve fiber density at the distal leg: a worldwide normative reference study. J Peripher Nerv Syst. 2010.

  3. Provitera V, et al. Quantitative and qualitative normative dataset for IENFD in distal leg. PLOS ONE. 2018.

  4. College of American Pathologists. Validating whole‑slide imaging for diagnostic purposes in pathology. Guideline update. 2021.

  5. FDA. 21 CFR Part 11 Guidance for Industry: Electronic Records; Electronic Signatures—Scope and Application. 2003.

  6. HHS OCR. HIPAA Security Rule—Administrative, Physical, and Technical Safeguards (with NIST SP 800‑66 mapping).

  7. FDA, Health Canada, MHRA. Good Machine Learning Practice for Medical Device Development: Guiding Principles. 2021 (with 2024 transparency extension).

  8. DICOM Supplement 145: Whole Slide Microscopic Image.

  9. OME‑NGFF for bioimaging (Zarr) specification.

Appendix 1: Measurement Definitions
  • IENFD (fibers/mm): N/L where N is the count of unique fibers traversing the dermal–epidermal junction within ROI(s); L is basement membrane length in millimeters after pixel calibration and geometric smoothing.

  • Confidence: Posterior‑derived summary calibrated against expert labels; displayed as a per‑slide scalar and optional per‑object heatmap.

  • QC Flags: Boolean or graded indicators for contrast, boundary confidence, artifact detection, and density variance.

Appendix 2: Validation Analyses (Statistical Details)
  • Deming/Passing–Bablok: regression with measurement error in both axes (Deming is parametric and orthogonal when the error‑variance ratio is 1; Passing–Bablok is nonparametric); report slope/intercept with 95% CIs and bias at clinical decision points.

  • Bland–Altman: Mean bias and ±1.96 SD limits of agreement; optionally, regression of differences to test proportional bias.

  • Precision (nested ANOVA): Variance components for within‑run, between‑run, between‑day, and between‑editor; compute CV% and total SD; report as per CLSI EP05 designs.

  • Agreement: ICC(2,1) with 95% CI.
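
A numpy sketch of ICC(2,1) per Shrout and Fleiss (two‑way random effects, absolute agreement, single measurement); the confidence interval is omitted.

```python
# ICC(2,1) from the two-way ANOVA mean squares of an n-targets x k-raters grid.
import numpy as np

def icc_2_1(Y: np.ndarray) -> float:
    n, k = Y.shape
    grand = Y.mean()
    ssr = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between targets
    ssc = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between raters
    sse = ((Y - grand) ** 2).sum() - ssr - ssc        # residual
    msr, msc = ssr / (n - 1), ssc / (k - 1)
    mse = sse / ((n - 1) * (k - 1))
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)
```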

Appendix 3: Security Controls (Illustrative)
  • Administrative: HIPAA training, risk analysis, vendor BAAs, incident response plan, access recertification.

  • Technical: TLS 1.3, AES‑256 at rest, MFA (where enabled), IP allow‑listing, signed URLs with short TTL, structured audit logs, integrity hashes on PDFs.

  • Physical: Cloud provider data‑center controls; customer‑side workstation hardening and network segmentation.

Media Contact

press@nervetrace.com

Automating epidermal and intraepidermal nerve analysis.
