Zero-recovery-time redundancy is one thing. Getting your clocks right across that redundant network is another. In substation automation and industrial control, time synchronization isn’t optional — protection functions, event logging, and sampled value streams all depend on it. This article explains how PTP works specifically in PRP and HSR environments, what the standard requires, and where the practical complexity lies.
Why PTP Behaves Differently on Redundant Networks
Standard PTP (IEC 61588:2009) assumes a device has one port. A grandmaster sends Sync messages, transparent clocks correct the timestamps as frames pass through, and ordinary clocks slave to the best master they find.
Before getting into the redundant-network specifics, it’s worth being clear on the four PTP clock types defined in IEC 61588:2009, since PRP and HSR use all of them:
Ordinary clock (OC) — a single-port device that either acts as grandmaster or synchronizes as a slave. In a domain it supports a single PTP state. Its local clock is either free-running (when master) or adjusted to follow its master (when slave).
Boundary clock (BC) — a multi-port device. Sync and Announce messages terminate at the BC and are not forwarded — the BC regenerates them on its other ports. It’s typically used as a network element, not associated with an application device. A BC with ports supporting both E2E and P2P mechanisms can bridge between regions using different delay mechanisms.
End-to-end transparent clock (E2E TC) — forwards all PTP messages like a normal bridge, but measures the residence time of event messages as they pass through and accumulates the correction in the correctionField. Does not terminate Sync or Announce messages.
Peer-to-peer transparent clock (P2P TC) — like an E2E TC but also measures the link delay on each port using Pdelay_Req/Pdelay_Resp messages. Corrects for both residence time and link delay. Pdelay messages are link-local and are not forwarded.
One important constraint: E2E and P2P mechanisms do not interwork on the same communication path. P2P TCs can only be used in topologies where each P2P port communicates with at most one other P2P port — they cannot be mixed with standard bridges or E2E TCs on the same path. This is why L2 P2P is a homogeneous choice: once you commit to it, all network elements in the path need to support it.
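The peer-delay calculation itself is simple. Here is a minimal sketch — timestamp names t1–t4 are the conventional ones from the Pdelay exchange, and the numeric values are illustrative:

```python
# Illustrative sketch of the peer-delay calculation used by P2P ports.
# t1 = Pdelay_Req sent, t2 = Pdelay_Req received by the responder,
# t3 = Pdelay_Resp sent, t4 = Pdelay_Resp received (all in nanoseconds).
def mean_link_delay(t1: int, t2: int, t3: int, t4: int) -> float:
    """Mean one-way link delay, assuming a symmetric link."""
    # (t4 - t1) is the round trip as seen by the requester;
    # (t3 - t2) is the turnaround time inside the responder.
    return ((t4 - t1) - (t3 - t2)) / 2

# Example: 1500 ns round trip, 500 ns responder turnaround -> 500 ns one way
print(mean_link_delay(0, 600, 1100, 1500))  # -> 500.0
```

Note the symmetry assumption: if the two directions of the link have different physical delays (e.g. unequal fiber lengths), the result is biased by half the asymmetry — which is why the commissioning checklist later includes a delayAsymmetry check.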
PRP and HSR break the single-port assumption. Devices have two active ports simultaneously — one on each LAN in PRP, one in each direction on the ring in HSR. A slave clock receives the same Sync message twice, from two different paths, with different accumulated delays. That’s not a failure — that’s by design. But it means the duplicate discard mechanism that works for normal traffic cannot be applied blindly to PTP frames.
The standard specifies how clocks attach to simultaneously active redundant paths in PRP and HSR. The key concept it introduces is the Doubly Attached Clock (DAC) — an ordinary clock with two redundant, paired ports, each executing the PTP protocol independently.
PTP Profiles
Two profiles are defined in the standard:
L2 P2P — Layer 2 with peer-to-peer delay measurement. Uses Ethernet transport. Peer delay measurement between adjacent nodes. This is the only PTP profile considered for HSR. All clocks shall support peer-to-peer delay measurement. For HSR environments, this is not a choice — it’s the profile.
L3 E2E — Layer 3 with end-to-end delay measurement. Uses UDP/IP transport. This applies to PRP networks, not HSR. It’s relevant when PRP nodes communicate over routed networks or where IP addressing is required.
In substation environments, L2 P2P is what you’ll work with on the HSR ring. It operates at Layer 2, avoids IP routing complexity, and peer-to-peer delay measurement gives accurate results because each node independently measures its link delays — corrections are applied locally at each hop, not accumulated end-to-end.
How the Best Master Clock Algorithm Works
The BMCA is what determines which clock becomes grandmaster — and what happens when it fails. Understanding it is essential for designing redundant PTP topologies.
The BMCA runs locally and independently on every ordinary and boundary clock in a domain. Clocks don’t negotiate — each one computes the state of its own ports based on Announce messages it receives and its own clock attributes. The algorithm runs continuously, so it readapts automatically as the network or clocks change.
What the BMCA compares
Each clock advertises itself via Announce messages. The BMCA compares clocks using an ordered set of attributes:
- priority1 — a user-configurable value, 0–255. Lower wins. This is the primary override: set priority1 lower on your GPS-disciplined grandmaster and it will always win unless it fails.
- clockClass — traceability of the time source. Class 6 means locked to a primary reference (e.g. GPS); Class 7 means holdover; Class 187 means time error exceeds 1 µs and the clock can slave to another.
- clockAccuracy — estimated accuracy of the clock when acting as grandmaster. Lower hex value = better accuracy.
- offsetScaledLogVariance — clock stability/jitter. Lower is better.
- priority2 — secondary user-configurable tiebreaker, 0–255.
- clockIdentity — the unique identifier, used as final tiebreaker.
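The comparison is strictly lexicographic: a later attribute matters only if all earlier ones tie. A minimal Python sketch of the dataset comparison — field names mirror the list above; this deliberately ignores the full IEEE 1588 state machine and topology-dependent tiebreakers:

```python
# Illustrative sketch of the BMCA dataset comparison (not the full
# IEEE 1588 state decision algorithm). Lower values win on every field.
from dataclasses import dataclass

@dataclass(frozen=True)
class ClockDataset:
    priority1: int                   # 0-255, primary user override
    clock_class: int                 # e.g. 6 locked, 7 holdover, 187 degraded
    clock_accuracy: int              # lower hex value = better accuracy
    offset_scaled_log_variance: int  # stability/jitter, lower = better
    priority2: int                   # secondary user tiebreaker
    clock_identity: bytes            # unique ID, final tiebreaker

    def sort_key(self):
        # Lexicographic: each attribute is only consulted
        # if all earlier attributes are equal.
        return (self.priority1, self.clock_class, self.clock_accuracy,
                self.offset_scaled_log_variance, self.priority2,
                self.clock_identity)

def best_master(candidates):
    return min(candidates, key=ClockDataset.sort_key)

gps_gm = ClockDataset(64, 6, 0x21, 0x4E5D, 128, b'\x00\x01')   # GPS-locked
backup = ClockDataset(128, 187, 0xFE, 0xFFFF, 128, b'\x00\x02')  # degraded
assert best_master([gps_gm, backup]) is gps_gm
```

This is also why setting priority1 on the GPS-disciplined grandmaster works as an override: it sits first in the tuple, so nothing later in the comparison can outvote it.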
The BMCA selects from clocks whose Announce messages have been received recently and are “qualified” — at least two Announce messages received within the last four announce intervals. This qualification window prevents spurious transitions caused by a rogue master appearing briefly.
What stepsRemoved does
Every Announce message carries a stepsRemoved field — the number of boundary clocks between the sending clock and the grandmaster. If this field reaches 255, the Announce message is discarded. This prevents timing loops. In practical substation networks with at most a handful of BCs in any path, this limit is never approached.
Failover timing
When an active grandmaster fails, slave clocks detect the loss through the announceReceiptTimeout — typically 3 announce intervals after the last Announce message. With a default announce interval of 1 second, that’s a 3-second detection window. For applications requiring faster failover, the announce interval can be reduced, but this increases network traffic.
This failover time is completely independent of the HSR or PRP network recovery time. Your ring can recover from a link break in zero time, but your clocks may still take seconds to detect the loss and elect a new grandmaster.
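The detection window is easy to compute from the two configurables. A quick sketch — the parameter names follow the standard’s announceReceiptTimeout and logAnnounceInterval conventions; the example values are illustrative:

```python
# Grandmaster-loss detection window, as described above.
# log_announce_interval is the base-2 log of the announce interval
# in seconds (0 -> 1 s, -2 -> 0.25 s).
def detection_window_s(announce_receipt_timeout: int,
                       log_announce_interval: int) -> float:
    return announce_receipt_timeout * 2.0 ** log_announce_interval

print(detection_window_s(3, 0))   # 1 s announce interval -> 3.0 s
print(detection_window_s(3, -2))  # 0.25 s interval -> 0.75 s, 4x the traffic
```

The traffic trade-off is visible directly: halving the announce interval halves the detection window but doubles the Announce message rate on every link.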
Timing Accuracy Requirements
The standard sets specific timing targets:
- Network time inaccuracy: Better than ±1 µs after crossing approximately 15 transparent clocks or 3 boundary clocks
- Grandmaster time inaccuracy: Less than 250 ns between its time reference signal and the produced synchronization messages
- Grandmaster holdover: Shall remain within ±250 ns for at least 5 seconds after losing its time reference signal (in steady state)
- Grandmaster clock class: Class 6 when locked to time reference; Class 7 during holdover; Class 187 when time error exceeds 1 µs
These targets apply to the full chain — grandmaster, transparent clocks, and boundary clocks together. The ±1 µs budget at the slave is the end result of all corrections along the path.
PTP in PRP Networks
The core problem
PRP frames arrive at a DANP on two ports with different delays — the two LANs are independent networks and frames take different paths. For normal traffic, the DANP discards the duplicate based on the RCT. For PTP, this doesn’t apply cleanly:
- 1-step Sync messages are modified in transit by transparent clocks (the correction field is updated). The two copies arriving at the DANP will have different correction fields.
- Follow_Up frames (generated by 2-step clocks) carry no RCT.
- Boundary clocks in the LAN generate their own Announce and Sync frames with no RCT appended.
The result: a DANP receives what look like two independent PTP messages from the same master — same Clock Identity, different delays. It treats each port independently and does not perform duplicate discard on PTP frames.
How a DANP handles PTP
A DANP operating as a Doubly Attached Clock:
- Runs the PTP state machine independently on each port
- Runs the Best Master Clock Algorithm (BMCA) independently on each port
- Calculates link delays per port independently
- Selects the port providing the best clock quality as the active synchronization source
- Can use Sync messages from both ports for synchronization, or from one port only
When both ports receive Sync from the same master with the same Clock Identity, the DANP applies hysteresis — it doesn’t switch synchronization sources on minor quality fluctuations, which would cause noisy behaviour.
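The standard does not prescribe a specific hysteresis algorithm. The following sketch only illustrates the idea, using an assumed scalar quality metric and an assumed switching threshold — both are hypothetical, not from the standard:

```python
# Illustrative hysteresis for a DANP's sync-source selection.
# 'quality' is a hypothetical scalar metric (higher = better);
# the 20% threshold is an assumption for illustration only.
def select_port(current: str, quality: dict, hysteresis: float = 0.2) -> str:
    """Switch ports only if the other port is better by a clear margin."""
    other = 'B' if current == 'A' else 'A'
    if quality[other] > quality[current] * (1 + hysteresis):
        return other
    return current

assert select_port('A', {'A': 1.0, 'B': 1.1}) == 'A'  # small edge: stay put
assert select_port('A', {'A': 1.0, 'B': 1.5}) == 'B'  # clear win: switch
```

Without a margin like this, small fluctuations in the per-port delay estimates would make the DANP flap between sources, injecting jitter into its servo.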
The correctionField — what it carries
Every PTP event message (Sync, Delay_Req, Pdelay_Req, Pdelay_Resp) has a correctionField. This is where residence time and link delay corrections accumulate as the message traverses the network.
For a 1-step transparent clock: the TC measures how long the Sync message spent inside it (ingress timestamp to egress timestamp) and adds that residence time directly to the correctionField as the message leaves. The slave receives a single Sync with the full accumulated correction.
For a 2-step transparent clock: the TC forwards the Sync message unmodified, then sends a Follow_Up message carrying the correction. The slave must receive both before it can use the timestamp.
In P2P mode, each TC also measures the link delay on each port independently using Pdelay_Req/Pdelay_Resp exchanges. The delay of the link on which a Sync arrives is added to the correctionField along with the residence time, so the correction accumulates the path delay from the grandmaster all the way up to the slave’s own ingress link. The slave adds the delay of that final link, which it measures itself.
The slave then computes its clock offset as:
offset = t_sync_receive − t_sync_send − correctionField − linkDelay
Where t_sync_send is carried in the Sync message, correctionField contains the accumulated residence time and link delay corrections along the path, and linkDelay is the slave’s own measured ingress link delay. If the corrections are accurate, this offset is a direct measure of how far the slave’s clock is from the grandmaster.
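As a worked example in P2P mode — the nanosecond values are chosen for illustration, and the slave’s own measured link delay enters the calculation alongside the correctionField:

```python
# Worked example of the slave's offset calculation (P2P mode).
# All values in nanoseconds; the numbers are illustrative.
def offset_from_master(t_sync_send: int, t_sync_receive: int,
                       correction_field: int, local_link_delay: int) -> int:
    # correction_field carries upstream residence times and link delays;
    # the slave accounts for the delay of its own ingress link.
    return t_sync_receive - t_sync_send - correction_field - local_link_delay

# Sync sent at t=0, received at t=2300 ns; TCs accumulated 1500 ns of
# residence time plus upstream link delay; slave's own link delay is 500 ns.
print(offset_from_master(0, 2300, 1500, 500))  # -> 300 (slave runs 300 ns fast)
```

A positive offset means the slave’s clock is ahead of the grandmaster; the servo’s job is to drive this value toward zero.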
In PRP, this mechanism yields two versions of the same Sync message — one per LAN — with different correctionField values reflecting the different paths taken. The slave treats each port independently and synchronizes to the port that provides the best clock quality.
A RedBox connecting SANs or external devices to a PRP network can operate in several clock roles:
DABC (Doubly Attached Boundary Clock): The RedBox implements a BC between its interlink port and the PRP LANs. Recommended for connecting a grandmaster to a PRP network — only one RedBox is actually sending Announce and Sync at a time; the other stands by in PASSIVE_MASTER state until it detects the first has failed.
DATC (Doubly Attached Transparent Clock): The RedBox implements a TC. Used when the clock source behind the RedBox needs to be transparent to the PRP network — correction fields are updated as frames pass through.
SLTC (Singly Linked Transparent Clock): The RedBox implements a TC on one side only. Used in specific mixed topologies.
The standard recommends connecting a grandmaster via BC (DABC), not TC — the BC-based connection provides better control over PASSIVE_MASTER failover behaviour.
PTP in HSR Networks
How HSR handles PTP differently
In HSR, the ring forwards every frame in both directions simultaneously. For normal frames, a DANH receives two copies and discards the duplicate based on the sequence number. PTP breaks this model for one reason: intermediate nodes modify the frames.
Transparent clocks in the ring update the correction field as each Sync message passes through. A frame going clockwise accumulates different corrections from a frame going counter-clockwise. By the time both copies reach the destination DANH, they are not identical — they carry different correction fields reflecting different path delays.
A DANH therefore does not receive the same PTP message from both ports. It receives two Sync messages from the same master, with the same Clock Identity but different delays. These are treated as two independent sources, not as duplicates.
The Hybrid Clock
Section A.5.2 of the standard is direct: a DANH shall implement a Hybrid Clock.
A Hybrid Clock is conceptually a combination of a transparent clock and an ordinary clock — its closest equivalent in IEC 61588:2009 is called a “Combined ordinary and peer-to-peer transparent clock.” What this means in practice:
- The DANH acts as a TC for frames passing through the ring — it updates residence time corrections as each Sync passes through
- The DANH acts as an OC for its own synchronization — it listens to the port providing the best clock quality and runs the BMCA
- PTP messages (except link-local Pdelay_Req and Pdelay_Resp) are sent with HSR tags and forwarded around the ring
- Pdelay_Req and Pdelay_Resp are link-local — they are sent untagged and are not forwarded
HSR nodes can operate as 1-step or 2-step clocks. In 1-step operation, each DANH computes and inserts the Sync correction inline as the frame passes through. In 2-step operation, a Follow_Up message carries the correction — the DANH must await the Follow_Up before using the Sync timestamps.
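A minimal sketch of the 1-step TC side of the Hybrid Clock — the timestamps and correction values are illustrative:

```python
# Illustrative 1-step residence-time correction in a Hybrid Clock's TC
# function: timestamp the Sync on ingress and egress, and add the
# difference to the correctionField before the frame leaves the node.
def forward_sync_1step(correction_field_ns: int,
                       ingress_ts_ns: int, egress_ts_ns: int) -> int:
    residence = egress_ts_ns - ingress_ts_ns
    return correction_field_ns + residence

# A Sync arrives carrying 1000 ns of accumulated correction and spends
# 250 ns inside this node before being forwarded around the ring:
print(forward_sync_1step(1000, 10_000, 10_250))  # -> 1250
```

In 2-step operation the arithmetic is the same, but the result travels in the Follow_Up message instead of being written inline into the Sync frame.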
Redundant clocks in HSR
Only one grandmaster can be active in an HSR ring at a time: there cannot be two simultaneously active master clocks in the ring for the same PTP domain. The grandmaster is selected among all grandmaster-capable clocks according to the BMCA.
If the active grandmaster fails, the backup grandmaster detects the loss — it ceases to receive Announce messages — and begins broadcasting its own Announce messages, taking over as grandmaster. This is standard PTP BMCA behaviour, not a specific HSR mechanism.
Clock terminology in HSR
Before going further, it’s worth pinning down the clock terms you’ll see in diagrams:
HC (Hybrid Clock) — the clock type implemented by every DANH on the ring. It’s a TC and an OC combined in a single device. The TC part corrects residence times for frames it forwards around the ring. The OC part synchronizes the node’s own clock to the best Sync message it receives, from whichever direction it arrives first.
GC (grandmaster clock) — not a separate device type. It’s a role. The GC is whichever DANH on the ring has been elected grandmaster by the BMCA. In practice it’s typically a DANH with a GPS input, but functionally it’s still a DANH implementing an HC — just the one that won the election.
BC (boundary clock) — appears at interlink ports, not on the ring itself. When two rings are coupled through QuadBoxes, or when a PRP network connects to an HSR ring through RedBoxes, the interlink ports implement BCs. The BC terminates PTP messages on one side and regenerates them on the other.
PTP in two HSR rings coupled by QuadBoxes
When you connect two HSR rings through a QuadBox pair, the clock topology adds a layer of complexity that directly affects your timing budget.
Each QuadBox is internally two RedBoxes with an interlink. The interlink ports implement BCs — one BC facing Ring 1, one facing Ring 2. This means:
- Every DANH on Ring 1 runs an HC and synchronizes directly to the grandmaster on Ring 1
- The QuadBox BC on the Ring 1 side acts as a slave to that grandmaster
- The QuadBox BC on the Ring 2 side acts as master toward Ring 2, regenerating Announce and Sync messages
- Every DANH on Ring 2 runs an HC and synchronizes to the QuadBox BC — not to the Ring 1 grandmaster directly
The grandmaster’s time doesn’t flow transparently across the ring boundary. It stops at the BC, which then acts as a new clock source for Ring 2. This is intentional — the BC isolates the two rings and handles the fact that PTP messages arrive from two directions at the ring boundary.
The timing budget implication is significant. The standard targets ±1 µs network time inaccuracy across approximately 3 boundary clocks. Two coupled rings with QuadBox BCs already consume one BC hop just at the ring junction. Add a BC at the grandmaster’s RedBox connection and another BC if the slave is behind a RedBox, and you’re at the limit of the chain before accounting for any TC hops on the rings themselves.
Design the clock topology before you design the network topology. Map out every BC in the chain from grandmaster to worst-case slave and count them. If you’re at 3 BCs and still have ring nodes to reach, you’re outside the target budget.
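A trivial helper makes the counting exercise concrete — the device labels below are illustrative, not terminology from the standard:

```python
# Budget check described above: count the boundary clocks on the path
# from grandmaster to worst-case slave against the ~3-BC target.
def bc_budget_ok(path: list, max_bcs: int = 3) -> bool:
    return sum(1 for dev in path if dev == 'BC') <= max_bcs

# Grandmaster behind a RedBox, a QuadBox ring junction, slave behind
# a second RedBox: three BC hops, exactly at the target limit.
path = ['GM', 'BC', 'TC', 'TC', 'BC', 'TC', 'BC', 'SLAVE']
print(bc_budget_ok(path))                # True: 3 BCs, at the limit
print(bc_budget_ok(path[:-1] + ['BC', 'SLAVE']))  # False: a 4th BC exceeds it
```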
If the grandmaster is external to the ring (e.g. a GPS-disciplined clock connected through a RedBox), the RedBox implements a TC or BC between its interlink port and the ring. The standard notes this arrangement allows an existing DANH to inject synchronization into the ring, but does not recommend it when the master clock is to be attached redundantly.
For redundant grandmaster attachment, the preferred approach is to have the grandmaster-capable device directly on the ring as a DANH.
PTP Across PRP-HSR Boundaries
When a PRP network connects to an HSR ring through a RedBox pair, the RedBoxes handle PTP differently depending on their configuration.
Connection via Boundary Clocks (recommended)
Each RedBox implements a BC. The two BCs are treated as redundant clocks on the HSR ring. Only one RedBox is actually sending Announce and Sync messages at a time — the other remains in PASSIVE_MASTER state. If the active RedBox stops, the passive one detects the loss and takes over.
The original PRP RCT is lost at the boundary — the HSR ring sees the BC’s clock, not the PRP grandmaster directly.
Connection via Transparent Clocks (not recommended)
Each RedBox implements a TC. This causes the injection of four PTP message paths into the ring, which the standard explicitly describes as not recommended and does not specify further.
Always use BCs, not TCs, when connecting PRP to HSR through RedBoxes.
Practical Checklist
Before commissioning:
- Confirm all DANHs implement a Hybrid Clock (mandatory)
- Confirm L2 P2P profile is configured (mandatory)
- Verify all network elements support P2P — E2E and P2P do not interwork on the same path
- Verify grandmaster time inaccuracy < 250 ns
- Check grandmaster holdover time ≥ 5 s
- Check priority1 and clockClass are correctly set on all grandmaster-capable clocks
- For PRP-to-HSR: RedBoxes configured as BCs, not TCs
- Verify only one grandmaster is active in each HSR ring domain
- Confirm Pdelay_Req/Pdelay_Resp are not being forwarded (link-local, untagged)
- Check for link delay asymmetry on fiber runs — if significant, configure delayAsymmetry correction
After commissioning:
- Verify network time inaccuracy < ±1 µs at worst-case slave
- Test grandmaster failover — verify slave holdover and BMCA takeover timing (expect 3× announce interval detection delay)
- Confirm clockClass transitions correctly during failover: Class 6 → Class 7 (holdover) → Class 187 (if error exceeds 1 µs)
- Test redundancy failover (link break) — verify clock sync recovers correctly
- Perform clock sync testing independently from topology failover testing
Conclusion
PTP works across PRP and HSR networks, but it requires more careful design than a standard Ethernet network. The dual-path architecture that gives you zero recovery time also means PTP frames arrive twice with different delays, intermediate nodes modify them, and the standard duplicate-discard mechanism doesn’t apply.
The key rules: DANHs must implement Hybrid Clocks, L2 P2P is mandatory, RedBoxes connecting PRP to HSR should use BC mode, and only one grandmaster can be active per HSR ring domain. Get those right and clock sync will survive both network faults and redundancy failovers cleanly.
If you’re designing for IEC 61850 applications requiring time-stamped events or sampled values, test clock sync as a separate commissioning step. A redundancy failover that recovers the network in zero time can still leave your clock sync disrupted for seconds if the PTP topology wasn’t designed correctly.
