1 Firmware Design
2 ===============
3
4 Trusted Firmware-A (TF-A) implements a subset of the Trusted Board Boot
5 Requirements (TBBR) Platform Design Document (PDD) [1]_ for Arm reference
6 platforms. The TBB sequence starts when the platform is powered on and runs up
7 to the stage where it hands-off control to firmware running in the normal
8 world in DRAM. This is the cold boot path.
9
10 TF-A also implements the Power State Coordination Interface PDD [2]_ as a
11 runtime service. PSCI is the interface from normal world software to firmware
12 implementing power management use-cases (for example, secondary CPU boot,
13 hotplug and idle). Normal world software can access TF-A runtime services via
14 the Arm SMC (Secure Monitor Call) instruction. The SMC instruction must be
15 used as mandated by the SMC Calling Convention [3]_.
16
17 TF-A implements a framework for configuring and managing interrupts generated
18 in either security state. The details of the interrupt management framework
19 and its design can be found in TF-A Interrupt Management Design guide [4]_.
20
21 TF-A also implements a library for setting up and managing the translation
22 tables. The details of this library can be found in `Translation tables design`_.
23
24 TF-A can be built to support either AArch64 or AArch32 execution state.
25
26 Cold boot
27 ---------
28
29 The cold boot path starts when the platform is physically turned on. If
30 ``COLD_BOOT_SINGLE_CPU=0``, one of the CPUs released from reset is chosen as the
31 primary CPU, and the remaining CPUs are considered secondary CPUs. The primary
32 CPU is chosen through platform-specific means. The cold boot path is mainly
33 executed by the primary CPU, other than essential CPU initialization executed by
34 all CPUs. The secondary CPUs are kept in a safe platform-specific state until
35 the primary CPU has performed enough initialization to boot them.
36
37 Refer to the `Reset Design`_ for more information on the effect of the
38 ``COLD_BOOT_SINGLE_CPU`` platform build option.
39
40 The cold boot path in this implementation of TF-A depends on the execution
41 state. For AArch64, it is divided into five steps (in order of execution):
42
43 - Boot Loader stage 1 (BL1) *AP Trusted ROM*
44 - Boot Loader stage 2 (BL2) *Trusted Boot Firmware*
45 - Boot Loader stage 3-1 (BL31) *EL3 Runtime Software*
46 - Boot Loader stage 3-2 (BL32) *Secure-EL1 Payload* (optional)
47 - Boot Loader stage 3-3 (BL33) *Non-trusted Firmware*
48
49 For AArch32, it is divided into four steps (in order of execution):
50
51 - Boot Loader stage 1 (BL1) *AP Trusted ROM*
52 - Boot Loader stage 2 (BL2) *Trusted Boot Firmware*
53 - Boot Loader stage 3-2 (BL32) *EL3 Runtime Software*
54 - Boot Loader stage 3-3 (BL33) *Non-trusted Firmware*
55
56 Arm development platforms (Fixed Virtual Platforms (FVPs) and Juno) implement a
57 combination of the following types of memory regions. Each bootloader stage uses
58 one or more of these memory regions.
59
60 - Regions accessible from both non-secure and secure states. For example,
61 non-trusted SRAM, ROM and DRAM.
62 - Regions accessible from only the secure state. For example, trusted SRAM and
63 ROM. The FVPs also implement the trusted DRAM which is statically
64 configured. Additionally, the Base FVPs and Juno development platform
65 configure the TrustZone Controller (TZC) to create a region in the DRAM
66 which is accessible only from the secure state.
67
68 The sections below provide the following details:
69
70 - dynamic configuration of Boot Loader stages
71 - initialization and execution of the first three stages during cold boot
72 - specification of the EL3 Runtime Software (BL31 for AArch64 and BL32 for
73 AArch32) entrypoint requirements for use by alternative Trusted Boot
74 Firmware in place of the provided BL1 and BL2
75
76 Dynamic Configuration during cold boot
77 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
78
79 Each of the Boot Loader stages may be dynamically configured if required by the
80 platform. The Boot Loader stage may optionally specify a firmware
81 configuration file and/or hardware configuration file as listed below:
82
83 - HW_CONFIG - The hardware configuration file. Can be shared by all Boot Loader
84 stages and also by the Normal World Rich OS.
85 - TB_FW_CONFIG - Trusted Boot Firmware configuration file. Shared between BL1
86 and BL2.
87 - SOC_FW_CONFIG - SoC Firmware configuration file. Used by BL31.
88 - TOS_FW_CONFIG - Trusted OS Firmware configuration file. Used by Trusted OS
89 (BL32).
90 - NT_FW_CONFIG - Non Trusted Firmware configuration file. Used by Non-trusted
91 firmware (BL33).
92
93 The Arm development platforms use the Flattened Device Tree format for the
94 dynamic configuration files.
95
96 Each Boot Loader stage can pass up to 4 arguments via registers to the next
97 stage. BL2 passes the list of the next images to execute to the *EL3 Runtime
Software* (BL31 for AArch64 and BL32 for AArch32) via ``arg0``. All the other
99 arguments are platform defined. The Arm development platforms use the following
100 convention:
101
102 - BL1 passes the address of a meminfo_t structure to BL2 via ``arg1``. This
103 structure contains the memory layout available to BL2.
- When dynamic configuration files are present, the firmware configuration for
  the next Boot Loader stage is populated in the first available argument and
  the generic hardware configuration is passed in the next available argument.
  For example:
108
109 - If TB_FW_CONFIG is loaded by BL1, then its address is passed in ``arg0``
110 to BL2.
111 - If HW_CONFIG is loaded by BL1, then its address is passed in ``arg2`` to
112 BL2. Note, ``arg1`` is already used for meminfo_t.
113 - If SOC_FW_CONFIG is loaded by BL2, then its address is passed in ``arg1``
114 to BL31. Note, ``arg0`` is used to pass the list of executable images.
115 - Similarly, if HW_CONFIG is loaded by BL1 or BL2, then its address is
116 passed in ``arg2`` to BL31.
117 - For other BL3x images, if the firmware configuration file is loaded by
118 BL2, then its address is passed in ``arg0`` and if HW_CONFIG is loaded
119 then its address is passed in ``arg1``.
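
To illustrate the convention above, the following is a simplified sketch of how
a BL1 platform port might populate the arguments passed to BL2. The
``entry_point_info_t`` and ``meminfo_t`` types follow TF-A conventions, but the
function and variable names are hypothetical and error handling is omitted:

.. code:: c

    /* Hypothetical helper - not the actual Arm platform implementation */
    static void populate_bl2_args(entry_point_info_t *bl2_ep,
                                  uintptr_t tb_fw_config_addr,
                                  uintptr_t hw_config_addr,
                                  meminfo_t *bl2_mem_layout)
    {
        /* arg0: address of TB_FW_CONFIG, if BL1 loaded it */
        bl2_ep->args.arg0 = tb_fw_config_addr;

        /* arg1: memory layout available to BL2 */
        bl2_ep->args.arg1 = (uintptr_t)bl2_mem_layout;

        /* arg2: address of the generic HW_CONFIG, if BL1 loaded it */
        bl2_ep->args.arg2 = hw_config_addr;
    }
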
120
121 BL1
122 ~~~
123
124 This stage begins execution from the platform's reset vector at EL3. The reset
125 address is platform dependent but it is usually located in a Trusted ROM area.
126 The BL1 data section is copied to trusted SRAM at runtime.
127
128 On the Arm development platforms, BL1 code starts execution from the reset
129 vector defined by the constant ``BL1_RO_BASE``. The BL1 data section is copied
130 to the top of trusted SRAM as defined by the constant ``BL1_RW_BASE``.
131
132 The functionality implemented by this stage is as follows.
133
134 Determination of boot path
135 ^^^^^^^^^^^^^^^^^^^^^^^^^^
136
137 Whenever a CPU is released from reset, BL1 needs to distinguish between a warm
138 boot and a cold boot. This is done using platform-specific mechanisms (see the
139 ``plat_get_my_entrypoint()`` function in the `Porting Guide`_). In the case of a
140 warm boot, a CPU is expected to continue execution from a separate
141 entrypoint. In the case of a cold boot, the secondary CPUs are placed in a safe
142 platform-specific state (see the ``plat_secondary_cold_boot_setup()`` function in
143 the `Porting Guide`_) while the primary CPU executes the remaining cold boot path
144 as described in the following sections.
145
146 This step only applies when ``PROGRAMMABLE_RESET_ADDRESS=0``. Refer to the
147 `Reset Design`_ for more information on the effect of the
148 ``PROGRAMMABLE_RESET_ADDRESS`` platform build option.
149
150 Architectural initialization
151 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
152
153 BL1 performs minimal architectural initialization as follows.
154
155 - Exception vectors
156
157 BL1 sets up simple exception vectors for both synchronous and asynchronous
158 exceptions. The default behavior upon receiving an exception is to populate
159 a status code in the general purpose register ``X0/R0`` and call the
160 ``plat_report_exception()`` function (see the `Porting Guide`_). The status
161 code is one of:
162
163 For AArch64:
164
165 ::
166
167 0x0 : Synchronous exception from Current EL with SP_EL0
168 0x1 : IRQ exception from Current EL with SP_EL0
169 0x2 : FIQ exception from Current EL with SP_EL0
170 0x3 : System Error exception from Current EL with SP_EL0
171 0x4 : Synchronous exception from Current EL with SP_ELx
172 0x5 : IRQ exception from Current EL with SP_ELx
173 0x6 : FIQ exception from Current EL with SP_ELx
174 0x7 : System Error exception from Current EL with SP_ELx
175 0x8 : Synchronous exception from Lower EL using aarch64
176 0x9 : IRQ exception from Lower EL using aarch64
177 0xa : FIQ exception from Lower EL using aarch64
178 0xb : System Error exception from Lower EL using aarch64
179 0xc : Synchronous exception from Lower EL using aarch32
180 0xd : IRQ exception from Lower EL using aarch32
181 0xe : FIQ exception from Lower EL using aarch32
182 0xf : System Error exception from Lower EL using aarch32
183
184 For AArch32:
185
186 ::
187
188 0x10 : User mode
189 0x11 : FIQ mode
190 0x12 : IRQ mode
191 0x13 : SVC mode
192 0x16 : Monitor mode
193 0x17 : Abort mode
194 0x1a : Hypervisor mode
195 0x1b : Undefined mode
196 0x1f : System mode
197
198 The ``plat_report_exception()`` implementation on the Arm FVP port programs
199 the Versatile Express System LED register in the following format to
200 indicate the occurrence of an unexpected exception:
201
202 ::
203
204 SYS_LED[0] - Security state (Secure=0/Non-Secure=1)
205 SYS_LED[2:1] - Exception Level (EL3=0x3, EL2=0x2, EL1=0x1, EL0=0x0)
206 For AArch32 it is always 0x0
207 SYS_LED[7:3] - Exception Class (Sync/Async & origin). This is the value
208 of the status code
209
210 A write to the LED register reflects in the System LEDs (S6LED0..7) in the
211 CLCD window of the FVP.
212
213 BL1 does not expect to receive any exceptions other than the SMC exception.
214 For the latter, BL1 installs a simple stub. The stub expects to receive a
215 limited set of SMC types (determined by their function IDs in the general
216 purpose register ``X0/R0``):
217
218 - ``BL1_SMC_RUN_IMAGE``: This SMC is raised by BL2 to make BL1 pass control
219 to EL3 Runtime Software.
220 - All SMCs listed in section "BL1 SMC Interface" in the `Firmware Update`_
221 Design Guide are supported for AArch64 only. These SMCs are currently
222 not supported when BL1 is built for AArch32.
223
224 Any other SMC leads to an assertion failure.
225
226 - CPU initialization
227
228 BL1 calls the ``reset_handler()`` function which in turn calls the CPU
229 specific reset handler function (see the section: "CPU specific operations
230 framework").
231
232 - Control register setup (for AArch64)
233
234 - ``SCTLR_EL3``. Instruction cache is enabled by setting the ``SCTLR_EL3.I``
235 bit. Alignment and stack alignment checking is enabled by setting the
236 ``SCTLR_EL3.A`` and ``SCTLR_EL3.SA`` bits. Exception endianness is set to
237 little-endian by clearing the ``SCTLR_EL3.EE`` bit.
238
239 - ``SCR_EL3``. The register width of the next lower exception level is set
240 to AArch64 by setting the ``SCR.RW`` bit. The ``SCR.EA`` bit is set to trap
241 both External Aborts and SError Interrupts in EL3. The ``SCR.SIF`` bit is
242 also set to disable instruction fetches from Non-secure memory when in
243 secure state.
244
245 - ``CPTR_EL3``. Accesses to the ``CPACR_EL1`` register from EL1 or EL2, or the
246 ``CPTR_EL2`` register from EL2 are configured to not trap to EL3 by
247 clearing the ``CPTR_EL3.TCPAC`` bit. Access to the trace functionality is
248 configured not to trap to EL3 by clearing the ``CPTR_EL3.TTA`` bit.
249 Instructions that access the registers associated with Floating Point
250 and Advanced SIMD execution are configured to not trap to EL3 by
251 clearing the ``CPTR_EL3.TFP`` bit.
252
253 - ``DAIF``. The SError interrupt is enabled by clearing the SError interrupt
254 mask bit.
255
256 - ``MDCR_EL3``. The trap controls, ``MDCR_EL3.TDOSA``, ``MDCR_EL3.TDA`` and
257 ``MDCR_EL3.TPM``, are set so that accesses to the registers they control
258 do not trap to EL3. AArch64 Secure self-hosted debug is disabled by
259 setting the ``MDCR_EL3.SDD`` bit. Also ``MDCR_EL3.SPD32`` is set to
260 disable AArch32 Secure self-hosted privileged debug from S-EL1.
261
262 - Control register setup (for AArch32)
263
264 - ``SCTLR``. Instruction cache is enabled by setting the ``SCTLR.I`` bit.
265 Alignment checking is enabled by setting the ``SCTLR.A`` bit.
266 Exception endianness is set to little-endian by clearing the
267 ``SCTLR.EE`` bit.
268
269 - ``SCR``. The ``SCR.SIF`` bit is set to disable instruction fetches from
270 Non-secure memory when in secure state.
271
272 - ``CPACR``. Allow execution of Advanced SIMD instructions at PL0 and PL1,
273 by clearing the ``CPACR.ASEDIS`` bit. Access to the trace functionality
274 is configured not to trap to undefined mode by clearing the
275 ``CPACR.TRCDIS`` bit.
276
277 - ``NSACR``. Enable non-secure access to Advanced SIMD functionality and
278 system register access to implemented trace registers.
279
280 - ``FPEXC``. Enable access to the Advanced SIMD and floating-point
281 functionality from all Exception levels.
282
283 - ``CPSR.A``. The Asynchronous data abort interrupt is enabled by clearing
284 the Asynchronous data abort interrupt mask bit.
285
286 - ``SDCR``. The ``SDCR.SPD`` field is set to disable AArch32 Secure
287 self-hosted privileged debug.
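
As a rough illustration of the AArch64 ``SCTLR_EL3`` settings described above,
the equivalent read-modify-write sequence is sketched below in C. The register
accessors and bit definitions (``read_sctlr_el3()``, ``SCTLR_I_BIT`` and so on)
exist in TF-A headers, but BL1 actually performs this setup in early assembly
code, so treat the snippet as illustrative only:

.. code:: c

    /* Illustrative only - BL1 does this in assembly during early boot */
    static void sketch_sctlr_el3_setup(void)
    {
        u_register_t sctlr = read_sctlr_el3();

        sctlr |= SCTLR_I_BIT;   /* Enable the instruction cache */
        sctlr |= SCTLR_A_BIT;   /* Enable alignment checking */
        sctlr |= SCTLR_SA_BIT;  /* Enable stack alignment checking */
        sctlr &= ~SCTLR_EE_BIT; /* Little-endian exception endianness */

        write_sctlr_el3(sctlr);
        isb();
    }
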
288
289 Platform initialization
290 ^^^^^^^^^^^^^^^^^^^^^^^
291
292 On Arm platforms, BL1 performs the following platform initializations:
293
294 - Enable the Trusted Watchdog.
295 - Initialize the console.
296 - Configure the Interconnect to enable hardware coherency.
297 - Enable the MMU and map the memory it needs to access.
298 - Configure any required platform storage to load the next bootloader image
299 (BL2).
300 - If the BL1 dynamic configuration file, ``TB_FW_CONFIG``, is available, then
301 load it to the platform defined address and make it available to BL2 via
302 ``arg0``.
- Configure the system timer and program the ``CNTFRQ_EL0`` register for use
  by NS-BL1U and NS-BL2U firmware update images.
305
306 Firmware Update detection and execution
307 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
308
309 After performing platform setup, BL1 common code calls
310 ``bl1_plat_get_next_image_id()`` to determine if `Firmware Update`_ is required or
311 to proceed with the normal boot process. If the platform code returns
312 ``BL2_IMAGE_ID`` then the normal boot sequence is executed as described in the
313 next section, else BL1 assumes that `Firmware Update`_ is required and execution
314 passes to the first image in the `Firmware Update`_ process. In either case, BL1
315 retrieves a descriptor of the next image by calling ``bl1_plat_get_image_desc()``.
316 The image descriptor contains an ``entry_point_info_t`` structure, which BL1
317 uses to initialize the execution state of the next image.
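
The decision logic described above can be summarised with the following
simplified sketch. ``bl1_plat_get_next_image_id()`` and
``bl1_plat_get_image_desc()`` are the platform interfaces referred to in this
section, while the surrounding control flow and the ``load_and_run_bl2()``
helper are purely illustrative:

.. code:: c

    /* Simplified view of BL1's next-image selection - not the exact TF-A code */
    void sketch_bl1_next_image(void)
    {
        unsigned int image_id = bl1_plat_get_next_image_id();

        if (image_id == BL2_IMAGE_ID) {
            load_and_run_bl2();  /* normal boot, described in the next section */
            return;
        }

        /* Firmware Update: hand over to the first image in the FWU process,
         * using the descriptor provided by the platform. */
        image_desc_t *desc = bl1_plat_get_image_desc(image_id);
        entry_point_info_t *ep = &desc->ep_info;

        /* ... BL1 uses 'ep' to initialize the execution state of that image. */
        (void)ep;
    }
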
318
319 BL2 image load and execution
320 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
321
322 In the normal boot flow, BL1 execution continues as follows:
323
324 #. BL1 prints the following string from the primary CPU to indicate successful
325 execution of the BL1 stage:
326
327 ::
328
329 "Booting Trusted Firmware"
330
331 #. BL1 loads a BL2 raw binary image from platform storage, at a
332 platform-specific base address. Prior to the load, BL1 invokes
333 ``bl1_plat_handle_pre_image_load()`` which allows the platform to update or
334 use the image information. If the BL2 image file is not present or if
335 there is not enough free trusted SRAM the following error message is
336 printed:
337
338 ::
339
340 "Failed to load BL2 firmware."
341
342 #. BL1 invokes ``bl1_plat_handle_post_image_load()`` which again is intended
343 for platforms to take further action after image load. This function must
344 populate the necessary arguments for BL2, which may also include the memory
345 layout. Further description of the memory layout can be found later
346 in this document.
347
348 #. BL1 passes control to the BL2 image at Secure EL1 (for AArch64) or at
349 Secure SVC mode (for AArch32), starting from its load address.
350
351 BL2
352 ~~~
353
354 BL1 loads and passes control to BL2 at Secure-EL1 (for AArch64) or at Secure
SVC mode (for AArch32). BL2 is linked against and loaded at a platform-specific
356 base address (more information can be found later in this document).
357 The functionality implemented by BL2 is as follows.
358
359 Architectural initialization
360 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
361
362 For AArch64, BL2 performs the minimal architectural initialization required
363 for subsequent stages of TF-A and normal world software. EL1 and EL0 are given
access to Floating Point and Advanced SIMD registers by setting the
``CPACR.FPEN`` bits.
366
367 For AArch32, the minimal architectural initialization required for subsequent
368 stages of TF-A and normal world software is taken care of in BL1 as both BL1
369 and BL2 execute at PL1.
370
371 Platform initialization
372 ^^^^^^^^^^^^^^^^^^^^^^^
373
374 On Arm platforms, BL2 performs the following platform initializations:
375
376 - Initialize the console.
377 - Configure any required platform storage to allow loading further bootloader
378 images.
379 - Enable the MMU and map the memory it needs to access.
380 - Perform platform security setup to allow access to controlled components.
381 - Reserve some memory for passing information to the next bootloader image
382 EL3 Runtime Software and populate it.
383 - Define the extents of memory available for loading each subsequent
384 bootloader image.
385 - If BL1 has passed TB_FW_CONFIG dynamic configuration file in ``arg0``,
386 then parse it.
387
388 Image loading in BL2
389 ^^^^^^^^^^^^^^^^^^^^
390
391 BL2 generic code loads the images based on the list of loadable images
392 provided by the platform. BL2 passes the list of executable images
393 provided by the platform to the next handover BL image.
394
395 The list of loadable images provided by the platform may also contain
396 dynamic configuration files. The files are loaded and can be parsed as
397 needed in the ``bl2_plat_handle_post_image_load()`` function. These
configuration files can be passed to the next Boot Loader stages as arguments
by updating the corresponding entrypoint information in this function.
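
A hedged sketch of such a platform hook is shown below. The
``bl2_plat_handle_post_image_load()`` name and the generic
``get_bl_mem_params_node()`` helper follow TF-A conventions, while the image
handled and the ``soc_fw_config_base`` variable are hypothetical:

.. code:: c

    /* Hypothetical: set when the configuration file was loaded */
    static uintptr_t soc_fw_config_base;

    /* Hypothetical platform hook - real ports add parsing and error handling */
    int bl2_plat_handle_post_image_load(unsigned int image_id)
    {
        bl_mem_params_node_t *params = get_bl_mem_params_node(image_id);

        if (params == NULL)
            return -1;

        if (image_id == BL31_IMAGE_ID) {
            /* Example: pass a previously loaded SOC_FW_CONFIG to BL31 in arg1 */
            params->ep_info.args.arg1 = soc_fw_config_base;
        }

        return 0;
    }
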
400
401 SCP_BL2 (System Control Processor Firmware) image load
402 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
403
404 Some systems have a separate System Control Processor (SCP) for power, clock,
405 reset and system control. BL2 loads the optional SCP_BL2 image from platform
406 storage into a platform-specific region of secure memory. The subsequent
407 handling of SCP_BL2 is platform specific. For example, on the Juno Arm
408 development platform port the image is transferred into SCP's internal memory
409 using the Boot Over MHU (BOM) protocol after being loaded in the trusted SRAM
410 memory. The SCP executes SCP_BL2 and signals to the Application Processor (AP)
411 for BL2 execution to continue.
412
413 EL3 Runtime Software image load
414 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
415
416 BL2 loads the EL3 Runtime Software image from platform storage into a platform-
specific address in trusted SRAM. If there is not enough memory to load the
image, or the image is missing, this leads to an assertion failure.
419
420 AArch64 BL32 (Secure-EL1 Payload) image load
421 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
422
423 BL2 loads the optional BL32 image from platform storage into a platform-
424 specific region of secure memory. The image executes in the secure world. BL2
425 relies on BL31 to pass control to the BL32 image, if present. Hence, BL2
426 populates a platform-specific area of memory with the entrypoint/load-address
427 of the BL32 image. The value of the Saved Processor Status Register (``SPSR``)
for entry into BL32 is not determined by BL2; it is initialized by the
429 Secure-EL1 Payload Dispatcher (see later) within BL31, which is responsible for
430 managing interaction with BL32. This information is passed to BL31.
431
432 BL33 (Non-trusted Firmware) image load
433 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
434
435 BL2 loads the BL33 image (e.g. UEFI or other test or boot software) from
436 platform storage into non-secure memory as defined by the platform.
437
438 BL2 relies on EL3 Runtime Software to pass control to BL33 once secure state
439 initialization is complete. Hence, BL2 populates a platform-specific area of
440 memory with the entrypoint and Saved Program Status Register (``SPSR``) of the
441 normal world software image. The entrypoint is the load address of the BL33
442 image. The ``SPSR`` is determined as specified in Section 5.13 of the
443 `PSCI PDD`_. This information is passed to the EL3 Runtime Software.
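
A condensed sketch of how this information might be populated is shown below.
``SPSR_64()``, ``MODE_EL2``, ``DISABLE_ALL_EXCEPTIONS``, ``SET_SECURITY_STATE()``
and ``plat_get_ns_image_entrypoint()`` are existing TF-A definitions; the
wrapper function is illustrative and the check for EL2 availability is omitted:

.. code:: c

    /* Illustrative sketch - assumes EL2 is implemented on the platform */
    static void sketch_populate_bl33_ep(entry_point_info_t *bl33_ep)
    {
        /* The entrypoint is the BL33 load address defined by the platform */
        bl33_ep->pc = plat_get_ns_image_entrypoint();

        /* SPSR as specified in the PSCI PDD: enter BL33 at the highest
         * available Exception Level with asynchronous exceptions masked. */
        bl33_ep->spsr = SPSR_64(MODE_EL2, MODE_SP_ELX, DISABLE_ALL_EXCEPTIONS);

        SET_SECURITY_STATE(bl33_ep->h.attr, NON_SECURE);
    }
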
444
445 AArch64 BL31 (EL3 Runtime Software) execution
446 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
447
448 BL2 execution continues as follows:
449
450 #. BL2 passes control back to BL1 by raising an SMC, providing BL1 with the
451 BL31 entrypoint. The exception is handled by the SMC exception handler
452 installed by BL1.
453
454 #. BL1 turns off the MMU and flushes the caches. It clears the
455 ``SCTLR_EL3.M/I/C`` bits, flushes the data cache to the point of coherency
456 and invalidates the TLBs.
457
458 #. BL1 passes control to BL31 at the specified entrypoint at EL3.
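
The hand-off in the first step above is a single SMC. In simplified form
(modelled on TF-A's ``bl2_main()``, with ``bl31_ep_info`` standing in for the
pointer to the BL31 entry point information prepared by the platform):

.. code:: c

    /* BL2 requests BL1 to run the EL3 Runtime Software image */
    smc(BL1_SMC_RUN_IMAGE, (unsigned long)bl31_ep_info, 0, 0, 0, 0, 0, 0);
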
459
460 Running BL2 at EL3 execution level
461 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
462
463 Some platforms have a non-TF-A Boot ROM that expects the next boot stage
464 to execute at EL3. On these platforms, TF-A BL1 is a waste of memory
465 as its only purpose is to ensure TF-A BL2 is entered at S-EL1. To avoid
466 this waste, a special mode enables BL2 to execute at EL3, which allows
467 a non-TF-A Boot ROM to load and jump directly to BL2. This mode is selected
when the build flag ``BL2_AT_EL3`` is enabled. The main differences in this
469 mode are:
470
471 #. BL2 includes the reset code and the mailbox mechanism to differentiate
472 cold boot and warm boot. It runs at EL3 doing the arch
473 initialization required for EL3.
474
475 #. BL2 does not receive the meminfo information from BL1 anymore. This
476 information can be passed by the Boot ROM or be internal to the
477 BL2 image.
478
479 #. Since BL2 executes at EL3, BL2 jumps directly to the next image,
480 instead of invoking the RUN_IMAGE SMC call.
481
482
We assume three different types of Boot ROM support on the platform:

#. The Boot ROM always jumps to the same address, for both cold
   and warm boot. In this case, we will need to keep a resident part
   of BL2 whose memory cannot be reclaimed by any other image. The
   linker script defines the symbols ``__TEXT_RESIDENT_START__`` and
   ``__TEXT_RESIDENT_END__``, which allow the platform to configure
   the memory map correctly.
#. The platform has some mechanism to indicate the jump address to the
   Boot ROM. Platform code can then program the jump address with
   ``psci_warmboot_entrypoint`` during cold boot.
#. The platform has some mechanism to program the reset address using
   the ``PROGRAMMABLE_RESET_ADDRESS`` feature. Platform code can then
   program the reset address with ``psci_warmboot_entrypoint`` during
   cold boot, bypassing the Boot ROM for warm boot.
498
499 In the last 2 cases, no part of BL2 needs to remain resident at
500 runtime. In the first 2 cases, we expect the Boot ROM to be able to
501 differentiate between warm and cold boot, to avoid loading BL2 again
502 during warm boot.
503
504 This functionality can be tested with FVP loading the image directly
505 in memory and changing the address where the system jumps at reset.
For example::

    -C cluster0.cpu0.RVBAR=0x4022000
    --data cluster0.cpu0=bl2.bin@0x4022000

With this configuration, FVP is like a platform of the first case,
where the Boot ROM always jumps to the same address. For simplification,
BL32 is loaded in DRAM in this case, to avoid other images reclaiming
BL2 memory.
515
516
517 AArch64 BL31
518 ~~~~~~~~~~~~
519
520 The image for this stage is loaded by BL2 and BL1 passes control to BL31 at
521 EL3. BL31 executes solely in trusted SRAM. BL31 is linked against and
522 loaded at a platform-specific base address (more information can be found later
523 in this document). The functionality implemented by BL31 is as follows.
524
525 Architectural initialization
526 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^
527
528 Currently, BL31 performs a similar architectural initialization to BL1 as
529 far as system register settings are concerned. Since BL1 code resides in ROM,
530 architectural initialization in BL31 allows override of any previous
531 initialization done by BL1.
532
533 BL31 initializes the per-CPU data framework, which provides a cache of
534 frequently accessed per-CPU data optimised for fast, concurrent manipulation
535 on different CPUs. This buffer includes pointers to per-CPU contexts, crash
536 buffer, CPU reset and power down operations, PSCI data, platform data and so on.
537
538 It then replaces the exception vectors populated by BL1 with its own. BL31
539 exception vectors implement more elaborate support for handling SMCs since this
540 is the only mechanism to access the runtime services implemented by BL31 (PSCI
541 for example). BL31 checks each SMC for validity as specified by the
542 `SMC calling convention PDD`_ before passing control to the required SMC
543 handler routine.
544
545 BL31 programs the ``CNTFRQ_EL0`` register with the clock frequency of the system
546 counter, which is provided by the platform.
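
In code this amounts to a single register write, roughly as follows
(``write_cntfrq_el0()`` and ``plat_get_syscnt_freq2()`` are existing TF-A
interfaces; the line is a simplified extract rather than the exact BL31 source):

.. code:: c

    /* Program the system counter frequency reported by the platform */
    write_cntfrq_el0(plat_get_syscnt_freq2());
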
547
548 Platform initialization
549 ^^^^^^^^^^^^^^^^^^^^^^^
550
551 BL31 performs detailed platform initialization, which enables normal world
552 software to function correctly.
553
554 On Arm platforms, this consists of the following:
555
556 - Initialize the console.
557 - Configure the Interconnect to enable hardware coherency.
558 - Enable the MMU and map the memory it needs to access.
559 - Initialize the generic interrupt controller.
560 - Initialize the power controller device.
561 - Detect the system topology.
562
563 Runtime services initialization
564 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
565
566 BL31 is responsible for initializing the runtime services. One of them is PSCI.
567
568 As part of the PSCI initializations, BL31 detects the system topology. It also
569 initializes the data structures that implement the state machine used to track
570 the state of power domain nodes. The state can be one of ``OFF``, ``RUN`` or
571 ``RETENTION``. All secondary CPUs are initially in the ``OFF`` state. The cluster
572 that the primary CPU belongs to is ``ON``; any other cluster is ``OFF``. It also
573 initializes the locks that protect them. BL31 accesses the state of a CPU or
574 cluster immediately after reset and before the data cache is enabled in the
warm boot path. It is not currently possible to use 'exclusive' based
spinlocks; therefore, BL31 uses locks based on Lamport's Bakery algorithm
instead.
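
A minimal sketch of declaring and using such a lock with TF-A's bakery lock API
is shown below. ``DEFINE_BAKERY_LOCK``, ``bakery_lock_get()`` and
``bakery_lock_release()`` are the existing interfaces; the protected state and
function are hypothetical:

.. code:: c

    /* Hypothetical example of protecting shared power state with a bakery lock */
    DEFINE_BAKERY_LOCK(example_pwr_lock);

    static unsigned int example_cluster_state;

    void example_set_cluster_state(unsigned int state)
    {
        bakery_lock_get(&example_pwr_lock);    /* safe with data caches disabled */
        example_cluster_state = state;
        bakery_lock_release(&example_pwr_lock);
    }
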
577
578 The runtime service framework and its initialization is described in more
579 detail in the "EL3 runtime services framework" section below.
580
581 Details about the status of the PSCI implementation are provided in the
582 "Power State Coordination Interface" section below.
583
584 AArch64 BL32 (Secure-EL1 Payload) image initialization
585 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
586
587 If a BL32 image is present then there must be a matching Secure-EL1 Payload
588 Dispatcher (SPD) service (see later for details). During initialization
589 that service must register a function to carry out initialization of BL32
590 once the runtime services are fully initialized. BL31 invokes such a
591 registered function to initialize BL32 before running BL33. This initialization
592 is not necessary for AArch32 SPs.
593
594 Details on BL32 initialization and the SPD's role are described in the
595 "Secure-EL1 Payloads and Dispatchers" section below.
596
597 BL33 (Non-trusted Firmware) execution
598 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
599
EL3 Runtime Software initializes the EL2 or EL1 processor context for
normal-world cold boot, ensuring that no secure state information finds its way
into
602 the non-secure execution state. EL3 Runtime Software uses the entrypoint
603 information provided by BL2 to jump to the Non-trusted firmware image (BL33)
604 at the highest available Exception Level (EL2 if available, otherwise EL1).
605
606 Using alternative Trusted Boot Firmware in place of BL1 & BL2 (AArch64 only)
607 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
608
609 Some platforms have existing implementations of Trusted Boot Firmware that
610 would like to use TF-A BL31 for the EL3 Runtime Software. To enable this
611 firmware architecture it is important to provide a fully documented and stable
612 interface between the Trusted Boot Firmware and BL31.
613
614 Future changes to the BL31 interface will be done in a backwards compatible
615 way, and this enables these firmware components to be independently enhanced/
616 updated to develop and exploit new functionality.
617
618 Required CPU state when calling ``bl31_entrypoint()`` during cold boot
619 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
620
621 This function must only be called by the primary CPU.
622
623 On entry to this function the calling primary CPU must be executing in AArch64
624 EL3, little-endian data access, and all interrupt sources masked:
625
626 ::
627
628 PSTATE.EL = 3
629 PSTATE.RW = 1
630 PSTATE.DAIF = 0xf
631 SCTLR_EL3.EE = 0
632
633 X0 and X1 can be used to pass information from the Trusted Boot Firmware to the
634 platform code in BL31:
635
636 ::
637
638 X0 : Reserved for common TF-A information
639 X1 : Platform specific information
640
BL31 zero-init sections (e.g. ``.bss``) should not contain valid data on entry;
they will be zero-filled prior to invoking platform setup code.
643
644 Use of the X0 and X1 parameters
645 '''''''''''''''''''''''''''''''
646
647 The parameters are platform specific and passed from ``bl31_entrypoint()`` to
648 ``bl31_early_platform_setup()``. The value of these parameters is never directly
649 used by the common BL31 code.
650
651 The convention is that ``X0`` conveys information regarding the BL31, BL32 and
652 BL33 images from the Trusted Boot firmware and ``X1`` can be used for other
653 platform specific purpose. This convention allows platforms which use TF-A's
654 BL1 and BL2 images to transfer additional platform specific information from
655 Secure Boot without conflicting with future evolution of TF-A using ``X0`` to
656 pass a ``bl31_params`` structure.
657
658 BL31 common and SPD initialization code depends on image and entrypoint
659 information about BL33 and BL32, which is provided via BL31 platform APIs.
660 This information is required until the start of execution of BL33. This
661 information can be provided in a platform defined manner, e.g. compiled into
662 the platform code in BL31, or provided in a platform defined memory location
663 by the Trusted Boot firmware, or passed from the Trusted Boot Firmware via the
664 Cold boot Initialization parameters. This data may need to be cleaned out of
665 the CPU caches if it is provided by an earlier boot stage and then accessed by
666 BL31 platform code before the caches are enabled.
667
668 TF-A's BL2 implementation passes a ``bl31_params`` structure in
669 ``X0`` and the Arm development platforms interpret this in the BL31 platform
670 code.
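
The way these parameters typically reach platform code can be outlined as
follows. The ``bl31_early_platform_setup()`` name and the ``bl31_params`` and
``param_header`` definitions come from the interfaces described in this
document, but the exact signature and the body shown are an illustrative
outline rather than the Arm platform implementation:

.. code:: c

    /* Illustrative outline of consuming the cold boot parameters in BL31 */
    void bl31_early_platform_setup(void *from_bl2, void *plat_params_from_bl2)
    {
        /* X0: common TF-A information, interpreted here as a bl31_params */
        bl31_params_t *params = (bl31_params_t *)from_bl2;

        /* X1: platform specific information, opaque to common code */
        (void)plat_params_from_bl2;

        /* Check the versioned param_header before trusting the contents */
        assert(params->h.type == PARAM_BL31);
        assert(params->h.version >= VERSION_1);
    }
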
671
672 MMU, Data caches & Coherency
673 ''''''''''''''''''''''''''''
674
675 BL31 does not depend on the enabled state of the MMU, data caches or
676 interconnect coherency on entry to ``bl31_entrypoint()``. If these are disabled
677 on entry, these should be enabled during ``bl31_plat_arch_setup()``.
678
679 Data structures used in the BL31 cold boot interface
680 ''''''''''''''''''''''''''''''''''''''''''''''''''''
681
682 These structures are designed to support compatibility and independent
683 evolution of the structures and the firmware images. For example, a version of
684 BL31 that can interpret the BL3x image information from different versions of
685 BL2, a platform that uses an extended entry_point_info structure to convey
686 additional register information to BL31, or a ELF image loader that can convey
687 more details about the firmware images.
688
689 To support these scenarios the structures are versioned and sized, which enables
690 BL31 to detect which information is present and respond appropriately. The
691 ``param_header`` is defined to capture this information:
692
693 .. code:: c
694
695 typedef struct param_header {
696 uint8_t type; /* type of the structure */
697 uint8_t version; /* version of this structure */
698 uint16_t size; /* size of this structure in bytes */
699 uint32_t attr; /* attributes: unused bits SBZ */
700 } param_header_t;
701
702 The structures using this format are ``entry_point_info``, ``image_info`` and
703 ``bl31_params``. The code that allocates and populates these structures must set
the header fields appropriately, and a ``SET_PARAM_HEAD()`` macro is defined
to simplify this action.
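
For example, the header of an ``entry_point_info`` structure describing a
secure image can be initialized as follows. ``SET_PARAM_HEAD()``,
``SET_SECURITY_STATE()``, ``PARAM_EP``, ``VERSION_1`` and ``SECURE`` are
existing TF-A definitions; the variable itself is illustrative:

.. code:: c

    /* Illustrative: fill in the versioned header of an entry point descriptor */
    entry_point_info_t bl32_ep_info;

    SET_PARAM_HEAD(&bl32_ep_info, PARAM_EP, VERSION_1, 0);
    SET_SECURITY_STATE(bl32_ep_info.h.attr, SECURE);
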
706
707 Required CPU state for BL31 Warm boot initialization
708 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
709
710 When requesting a CPU power-on, or suspending a running CPU, TF-A provides
711 the platform power management code with a Warm boot initialization
712 entry-point, to be invoked by the CPU immediately after the reset handler.
713 On entry to the Warm boot initialization function the calling CPU must be in
714 AArch64 EL3, little-endian data access and all interrupt sources masked:
715
716 ::
717
718 PSTATE.EL = 3
719 PSTATE.RW = 1
720 PSTATE.DAIF = 0xf
721 SCTLR_EL3.EE = 0
722
723 The PSCI implementation will initialize the processor state and ensure that the
724 platform power management code is then invoked as required to initialize all
725 necessary system, cluster and CPU resources.
726
727 AArch32 EL3 Runtime Software entrypoint interface
728 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
729
730 To enable this firmware architecture it is important to provide a fully
731 documented and stable interface between the Trusted Boot Firmware and the
732 AArch32 EL3 Runtime Software.
733
734 Future changes to the entrypoint interface will be done in a backwards
735 compatible way, and this enables these firmware components to be independently
736 enhanced/updated to develop and exploit new functionality.
737
738 Required CPU state when entering during cold boot
739 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
740
741 This function must only be called by the primary CPU.
742
743 On entry to this function the calling primary CPU must be executing in AArch32
744 EL3, little-endian data access, and all interrupt sources masked:
745
746 ::
747
748 PSTATE.AIF = 0x7
749 SCTLR.EE = 0
750
751 R0 and R1 are used to pass information from the Trusted Boot Firmware to the
752 platform code in AArch32 EL3 Runtime Software:
753
754 ::
755
756 R0 : Reserved for common TF-A information
757 R1 : Platform specific information
758
759 Use of the R0 and R1 parameters
760 '''''''''''''''''''''''''''''''
761
762 The parameters are platform specific and the convention is that ``R0`` conveys
763 information regarding the BL3x images from the Trusted Boot firmware and ``R1``
764 can be used for other platform specific purpose. This convention allows
765 platforms which use TF-A's BL1 and BL2 images to transfer additional platform
766 specific information from Secure Boot without conflicting with future
767 evolution of TF-A using ``R0`` to pass a ``bl_params`` structure.
768
769 The AArch32 EL3 Runtime Software is responsible for entry into BL33. This
770 information can be obtained in a platform defined manner, e.g. compiled into
771 the AArch32 EL3 Runtime Software, or provided in a platform defined memory
772 location by the Trusted Boot firmware, or passed from the Trusted Boot Firmware
773 via the Cold boot Initialization parameters. This data may need to be cleaned
774 out of the CPU caches if it is provided by an earlier boot stage and then
775 accessed by AArch32 EL3 Runtime Software before the caches are enabled.
776
777 When using AArch32 EL3 Runtime Software, the Arm development platforms pass a
778 ``bl_params`` structure in ``R0`` from BL2 to be interpreted by AArch32 EL3 Runtime
779 Software platform code.
780
781 MMU, Data caches & Coherency
782 ''''''''''''''''''''''''''''
783
784 AArch32 EL3 Runtime Software must not depend on the enabled state of the MMU,
785 data caches or interconnect coherency in its entrypoint. They must be explicitly
786 enabled if required.
787
788 Data structures used in cold boot interface
789 '''''''''''''''''''''''''''''''''''''''''''
790
791 The AArch32 EL3 Runtime Software cold boot interface uses ``bl_params`` instead
792 of ``bl31_params``. The ``bl_params`` structure is based on the convention
793 described in AArch64 BL31 cold boot interface section.
794
795 Required CPU state for warm boot initialization
796 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
797
798 When requesting a CPU power-on, or suspending a running CPU, AArch32 EL3
799 Runtime Software must ensure execution of a warm boot initialization entrypoint.
If TF-A BL1 is used and the ``PROGRAMMABLE_RESET_ADDRESS`` build flag is false,
then AArch32 EL3 Runtime Software must ensure that BL1 branches to the warm
boot entrypoint by arranging for the BL1 platform function,
``plat_get_my_entrypoint()``, to return a non-zero value.
804
805 In this case, the warm boot entrypoint must be in AArch32 EL3, little-endian
806 data access and all interrupt sources masked:
807
808 ::
809
810 PSTATE.AIF = 0x7
811 SCTLR.EE = 0
812
The warm boot entrypoint may be implemented by using the TF-A
814 ``psci_warmboot_entrypoint()`` function. In that case, the platform must fulfil
815 the pre-requisites mentioned in the `PSCI Library integration guide`_.
816
817 EL3 runtime services framework
818 ------------------------------
819
820 Software executing in the non-secure state and in the secure state at exception
821 levels lower than EL3 will request runtime services using the Secure Monitor
822 Call (SMC) instruction. These requests will follow the convention described in
823 the SMC Calling Convention PDD (`SMCCC`_). The `SMCCC`_ assigns function
824 identifiers to each SMC request and describes how arguments are passed and
825 returned.
826
827 The EL3 runtime services framework enables the development of services by
828 different providers that can be easily integrated into final product firmware.
829 The following sections describe the framework which facilitates the
830 registration, initialization and use of runtime services in EL3 Runtime
831 Software (BL31).
832
833 The design of the runtime services depends heavily on the concepts and
834 definitions described in the `SMCCC`_, in particular SMC Function IDs, Owning
835 Entity Numbers (OEN), Fast and Yielding calls, and the SMC32 and SMC64 calling
836 conventions. Please refer to that document for more detailed explanation of
837 these terms.
838
839 The following runtime services are expected to be implemented first. They have
840 not all been instantiated in the current implementation.
841
842 #. Standard service calls
843
844 This service is for management of the entire system. The Power State
845 Coordination Interface (`PSCI`_) is the first set of standard service calls
846 defined by Arm (see PSCI section later).
847
848 #. Secure-EL1 Payload Dispatcher service
849
850 If a system runs a Trusted OS or other Secure-EL1 Payload (SP) then
851 it also requires a *Secure Monitor* at EL3 to switch the EL1 processor
852 context between the normal world (EL1/EL2) and trusted world (Secure-EL1).
853 The Secure Monitor will make these world switches in response to SMCs. The
854 `SMCCC`_ provides for such SMCs with the Trusted OS Call and Trusted
855 Application Call OEN ranges.
856
857 The interface between the EL3 Runtime Software and the Secure-EL1 Payload is
858 not defined by the `SMCCC`_ or any other standard. As a result, each
859 Secure-EL1 Payload requires a specific Secure Monitor that runs as a runtime
860 service - within TF-A this service is referred to as the Secure-EL1 Payload
861 Dispatcher (SPD).
862
863 TF-A provides a Test Secure-EL1 Payload (TSP) and its associated Dispatcher
864 (TSPD). Details of SPD design and TSP/TSPD operation are described in the
865 "Secure-EL1 Payloads and Dispatchers" section below.
866
867 #. CPU implementation service
868
869 This service will provide an interface to CPU implementation specific
870 services for a given platform e.g. access to processor errata workarounds.
871 This service is currently unimplemented.
872
873 Additional services for Arm Architecture, SiP and OEM calls can be implemented.
874 Each implemented service handles a range of SMC function identifiers as
875 described in the `SMCCC`_.
876
877 Registration
878 ~~~~~~~~~~~~
879
880 A runtime service is registered using the ``DECLARE_RT_SVC()`` macro, specifying
881 the name of the service, the range of OENs covered, the type of service and
initialization and call handler functions. This macro instantiates a
``const struct rt_svc_desc`` for the service with these details (see
``runtime_svc.h``).
883 This structure is allocated in a special ELF section ``rt_svc_descs``, enabling
884 the framework to find all service descriptors included into BL31.
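
A hypothetical registration might look like the following.
``DECLARE_RT_SVC()``, ``OEN_SIP_START``, ``OEN_SIP_END`` and ``SMC_TYPE_FAST``
are existing TF-A identifiers; the service name and the ``init``/``handle``
functions are placeholders:

.. code:: c

    /* Hypothetical SiP service registration - function bodies not shown */
    DECLARE_RT_SVC(
        example_sip_svc,
        OEN_SIP_START,
        OEN_SIP_END,
        SMC_TYPE_FAST,
        example_sip_setup,       /* init(): one-off EL3 initialization */
        example_sip_smc_handler  /* handle(): invoked for each matching SMC */
    );
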
885
886 The specific service for a SMC Function is selected based on the OEN and call
887 type of the Function ID, and the framework uses that information in the service
888 descriptor to identify the handler for the SMC Call.
889
890 The service descriptors do not include information to identify the precise set
891 of SMC function identifiers supported by this service implementation, the
892 security state from which such calls are valid nor the capability to support
893 64-bit and/or 32-bit callers (using SMC32 or SMC64). Responding appropriately
894 to these aspects of a SMC call is the responsibility of the service
895 implementation, the framework is focused on integration of services from
896 different providers and minimizing the time taken by the framework before the
897 service handler is invoked.
898
899 Details of the parameters, requirements and behavior of the initialization and
900 call handling functions are provided in the following sections.
901
902 Initialization
903 ~~~~~~~~~~~~~~
904
905 ``runtime_svc_init()`` in ``runtime_svc.c`` initializes the runtime services
906 framework running on the primary CPU during cold boot as part of the BL31
907 initialization. This happens prior to initializing a Trusted OS and running
908 Normal world boot firmware that might in turn use these services.
909 Initialization involves validating each of the declared runtime service
910 descriptors, calling the service initialization function and populating the
911 index used for runtime lookup of the service.
912
913 The BL31 linker script collects all of the declared service descriptors into a
914 single array and defines symbols that allow the framework to locate and traverse
915 the array, and determine its size.
916
917 The framework does basic validation of each descriptor to halt firmware
918 initialization if service declaration errors are detected. The framework does
919 not check descriptors for the following error conditions, and may behave in an
920 unpredictable manner under such scenarios:
921
922 #. Overlapping OEN ranges
923 #. Multiple descriptors for the same range of OENs and ``call_type``
924 #. Incorrect range of owning entity numbers for a given ``call_type``
925
926 Once validated, the service ``init()`` callback is invoked. This function carries
927 out any essential EL3 initialization before servicing requests. The ``init()``
928 function is only invoked on the primary CPU during cold boot. If the service
929 uses per-CPU data this must either be initialized for all CPUs during this call,
930 or be done lazily when a CPU first issues an SMC call to that service. If
931 ``init()`` returns anything other than ``0``, this is treated as an initialization
932 error and the service is ignored: this does not cause the firmware to halt.
933
934 The OEN and call type fields present in the SMC Function ID cover a total of
935 128 distinct services, but in practice a single descriptor can cover a range of
936 OENs, e.g. SMCs to call a Trusted OS function. To optimize the lookup of a
937 service handler, the framework uses an array of 128 indices that map every
938 distinct OEN/call-type combination either to one of the declared services or to
939 indicate the service is not handled. This ``rt_svc_descs_indices[]`` array is
940 populated for all of the OENs covered by a service after the service ``init()``
941 function has reported success. So a service that fails to initialize will never
942 have it's ``handle()`` function invoked.
943
944 The following figure shows how the ``rt_svc_descs_indices[]`` index maps the SMC
945 Function ID call type and OEN onto a specific service handler in the
946 ``rt_svc_descs[]`` array.
947
948 |Image 1|
949
950 Handling an SMC
951 ~~~~~~~~~~~~~~~
952
953 When the EL3 runtime services framework receives a Secure Monitor Call, the SMC
954 Function ID is passed in W0 from the lower exception level (as per the
955 `SMCCC`_). If the calling register width is AArch32, it is invalid to invoke an
956 SMC Function which indicates the SMC64 calling convention: such calls are
957 ignored and return the Unknown SMC Function Identifier result code ``0xFFFFFFFF``
958 in R0/X0.
959
960 Bit[31] (fast/yielding call) and bits[29:24] (owning entity number) of the SMC
961 Function ID are combined to index into the ``rt_svc_descs_indices[]`` array. The
962 resulting value might indicate a service that has no handler, in this case the
963 framework will also report an Unknown SMC Function ID. Otherwise, the value is
964 used as a further index into the ``rt_svc_descs[]`` array to locate the required
965 service and handler.
966
967 The service's ``handle()`` callback is provided with five of the SMC parameters
968 directly, the others are saved into memory for retrieval (if needed) by the
969 handler. The handler is also provided with an opaque ``handle`` for use with the
970 supporting library for parameter retrieval, setting return values and context
971 manipulation; and with ``flags`` indicating the security state of the caller. The
972 framework finally sets up the execution stack for the handler, and invokes the
service's ``handle()`` function.
974
975 On return from the handler the result registers are populated in X0-X3 before
976 restoring the stack and CPU state and returning from the original SMC.
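
The general shape of a ``handle()`` callback and its return path can be
sketched as follows. The parameter list mirrors the runtime service handler
prototype in ``runtime_svc.h``, and ``SMC_RET1``/``SMC_RET2``/``SMC_UNK`` and
``is_caller_secure()`` are existing TF-A definitions; the function itself and
``EXAMPLE_SMC_FID`` are placeholders:

.. code:: c

    /* Placeholder handler showing the general shape, not a real service */
    static uintptr_t example_smc_handler(uint32_t smc_fid,
                                         u_register_t x1, u_register_t x2,
                                         u_register_t x3, u_register_t x4,
                                         void *cookie, void *handle,
                                         u_register_t flags)
    {
        /* Example policy: only accept calls from the normal world */
        if (is_caller_secure(flags))
            SMC_RET1(handle, SMC_UNK);

        switch (smc_fid) {
        case EXAMPLE_SMC_FID:              /* hypothetical function ID */
            SMC_RET2(handle, 0, x1 + x2);  /* results returned in X0 and X1 */
        default:
            SMC_RET1(handle, SMC_UNK);
        }
    }
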
977
978 Exception Handling Framework
979 ----------------------------
980
981 Please refer to the `Exception Handling Framework`_ document.
982
983 Power State Coordination Interface
984 ----------------------------------
985
986 TODO: Provide design walkthrough of PSCI implementation.
987
988 The PSCI v1.1 specification categorizes APIs as optional and mandatory. All the
989 mandatory APIs in PSCI v1.1, PSCI v1.0 and in PSCI v0.2 draft specification
990 `Power State Coordination Interface PDD`_ are implemented. The table lists
991 the PSCI v1.1 APIs and their support in generic code.
992
993 An API implementation might have a dependency on platform code e.g. CPU_SUSPEND
994 requires the platform to export a part of the implementation. Hence the level
995 of support of the mandatory APIs depends upon the support exported by the
996 platform port as well. The Juno and FVP (all variants) platforms export all the
997 required support.
998
999 +-----------------------------+-------------+-------------------------------+
1000 | PSCI v1.1 API | Supported | Comments |
1001 +=============================+=============+===============================+
1002 | ``PSCI_VERSION`` | Yes | The version returned is 1.1 |
1003 +-----------------------------+-------------+-------------------------------+
1004 | ``CPU_SUSPEND`` | Yes\* | |
1005 +-----------------------------+-------------+-------------------------------+
1006 | ``CPU_OFF`` | Yes\* | |
1007 +-----------------------------+-------------+-------------------------------+
1008 | ``CPU_ON`` | Yes\* | |
1009 +-----------------------------+-------------+-------------------------------+
1010 | ``AFFINITY_INFO`` | Yes | |
1011 +-----------------------------+-------------+-------------------------------+
1012 | ``MIGRATE`` | Yes\*\* | |
1013 +-----------------------------+-------------+-------------------------------+
1014 | ``MIGRATE_INFO_TYPE`` | Yes\*\* | |
1015 +-----------------------------+-------------+-------------------------------+
1016 | ``MIGRATE_INFO_CPU`` | Yes\*\* | |
1017 +-----------------------------+-------------+-------------------------------+
1018 | ``SYSTEM_OFF`` | Yes\* | |
1019 +-----------------------------+-------------+-------------------------------+
1020 | ``SYSTEM_RESET`` | Yes\* | |
1021 +-----------------------------+-------------+-------------------------------+
1022 | ``PSCI_FEATURES`` | Yes | |
1023 +-----------------------------+-------------+-------------------------------+
1024 | ``CPU_FREEZE`` | No | |
1025 +-----------------------------+-------------+-------------------------------+
1026 | ``CPU_DEFAULT_SUSPEND`` | No | |
1027 +-----------------------------+-------------+-------------------------------+
1028 | ``NODE_HW_STATE`` | Yes\* | |
1029 +-----------------------------+-------------+-------------------------------+
1030 | ``SYSTEM_SUSPEND`` | Yes\* | |
1031 +-----------------------------+-------------+-------------------------------+
1032 | ``PSCI_SET_SUSPEND_MODE`` | No | |
1033 +-----------------------------+-------------+-------------------------------+
1034 | ``PSCI_STAT_RESIDENCY`` | Yes\* | |
1035 +-----------------------------+-------------+-------------------------------+
1036 | ``PSCI_STAT_COUNT`` | Yes\* | |
1037 +-----------------------------+-------------+-------------------------------+
1038 | ``SYSTEM_RESET2`` | Yes\* | |
1039 +-----------------------------+-------------+-------------------------------+
1040 | ``MEM_PROTECT`` | Yes\* | |
1041 +-----------------------------+-------------+-------------------------------+
1042 | ``MEM_PROTECT_CHECK_RANGE`` | Yes\* | |
1043 +-----------------------------+-------------+-------------------------------+
1044
1045 \*Note : These PSCI APIs require platform power management hooks to be
1046 registered with the generic PSCI code to be supported.
1047
1048 \*\*Note : These PSCI APIs require appropriate Secure Payload Dispatcher
1049 hooks to be registered with the generic PSCI code to be supported.
1050
1051 The PSCI implementation in TF-A is a library which can be integrated with
1052 AArch64 or AArch32 EL3 Runtime Software for Armv8-A systems. A guide to
1053 integrating PSCI library with AArch32 EL3 Runtime Software can be found
1054 `here`_.
1055
1056 Secure-EL1 Payloads and Dispatchers
1057 -----------------------------------
1058
1059 On a production system that includes a Trusted OS running in Secure-EL1/EL0,
1060 the Trusted OS is coupled with a companion runtime service in the BL31
1061 firmware. This service is responsible for the initialisation of the Trusted
1062 OS and all communications with it. The Trusted OS is the BL32 stage of the
1063 boot flow in TF-A. The firmware will attempt to locate, load and execute a
1064 BL32 image.
1065
1066 TF-A uses a more general term for the BL32 software that runs at Secure-EL1 -
1067 the *Secure-EL1 Payload* - as it is not always a Trusted OS.
1068
1069 TF-A provides a Test Secure-EL1 Payload (TSP) and a Test Secure-EL1 Payload
1070 Dispatcher (TSPD) service as an example of how a Trusted OS is supported on a
1071 production system using the Runtime Services Framework. On such a system, the
1072 Test BL32 image and service are replaced by the Trusted OS and its dispatcher
1073 service. The TF-A build system expects that the dispatcher will define the
1074 build flag ``NEED_BL32`` to enable it to include the BL32 in the build either
1075 as a binary or to compile from source depending on whether the ``BL32`` build
1076 option is specified or not.
1077
1078 The TSP runs in Secure-EL1. It is designed to demonstrate synchronous
1079 communication with the normal-world software running in EL1/EL2. Communication
1080 is initiated by the normal-world software
1081
1082 - either directly through a Fast SMC (as defined in the `SMCCC`_)
1083
1084 - or indirectly through a `PSCI`_ SMC. The `PSCI`_ implementation in turn
1085 informs the TSPD about the requested power management operation. This allows
1086 the TSP to prepare for or respond to the power state change
1087
The TSPD service is responsible for:
1089
1090 - Initializing the TSP
1091
1092 - Routing requests and responses between the secure and the non-secure
1093 states during the two types of communications just described
1094
1095 Initializing a BL32 Image
1096 ~~~~~~~~~~~~~~~~~~~~~~~~~
1097
1098 The Secure-EL1 Payload Dispatcher (SPD) service is responsible for initializing
1099 the BL32 image. It needs access to the information passed by BL2 to BL31 to do
1100 so. This is provided by:
1101
1102 .. code:: c
1103
1104 entry_point_info_t *bl31_plat_get_next_image_ep_info(uint32_t);
1105
1106 which returns a reference to the ``entry_point_info`` structure corresponding to
1107 the image which will be run in the specified security state. The SPD uses this
1108 API to get entry point information for the SECURE image, BL32.
1109
1110 In the absence of a BL32 image, BL31 passes control to the normal world
1111 bootloader image (BL33). When the BL32 image is present, it is typical
1112 that the SPD wants control to be passed to BL32 first and then later to BL33.
1113
1114 To do this the SPD has to register a BL32 initialization function during
1115 initialization of the SPD service. The BL32 initialization function has this
1116 prototype:
1117
1118 .. code:: c
1119
1120 int32_t init(void);
1121
1122 and is registered using the ``bl31_register_bl32_init()`` function.
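
A condensed sketch of this registration pattern is shown below.
``bl31_plat_get_next_image_ep_info()`` and ``bl31_register_bl32_init()`` are
the interfaces described in this section; the SPD function names and the
synchronous entry helper are placeholders:

.. code:: c

    /* Placeholder SPD code showing the BL32 init registration pattern */
    static entry_point_info_t *bl32_ep_info;

    /* Placeholder for an SPD-specific world-switch entry mechanism */
    static int32_t example_spd_synchronous_entry(entry_point_info_t *ep);

    static int32_t example_spd_init(void)
    {
        /* Enter BL32 for its initialization using the SPD-specific mechanism */
        return example_spd_synchronous_entry(bl32_ep_info);
    }

    int32_t example_spd_setup(void)
    {
        /* Entry point information that BL2 provided for the SECURE image */
        bl32_ep_info = bl31_plat_get_next_image_ep_info(SECURE);
        if (bl32_ep_info == NULL)
            return 1;

        /* Ask BL31 to call back into the SPD once runtime services are ready */
        bl31_register_bl32_init(&example_spd_init);
        return 0;
    }
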
1123
1124 TF-A supports two approaches for the SPD to pass control to BL32 before
1125 returning through EL3 and running the non-trusted firmware (BL33):
1126
1127 #. In the BL32 setup function, use ``bl31_set_next_image_type()`` to
1128 request that the exit from ``bl31_main()`` is to the BL32 entrypoint in
1129 Secure-EL1. BL31 will exit to BL32 using the asynchronous method by
1130 calling ``bl31_prepare_next_image_entry()`` and ``el3_exit()``.
1131
1132 When the BL32 has completed initialization at Secure-EL1, it returns to
1133 BL31 by issuing an SMC, using a Function ID allocated to the SPD. On
1134 receipt of this SMC, the SPD service handler should switch the CPU context
1135 from trusted to normal world and use the ``bl31_set_next_image_type()`` and
1136 ``bl31_prepare_next_image_entry()`` functions to set up the initial return to
1137 the normal world firmware BL33. On return from the handler the framework
1138 will exit to EL2 and run BL33.
1139
1140 #. The BL32 setup function registers an initialization function using
1141 ``bl31_register_bl32_init()`` which provides a SPD-defined mechanism to
1142 invoke a 'world-switch synchronous call' to Secure-EL1 to run the BL32
1143 entrypoint.
1144
1145 .. note::
1146 The Test SPD service included with TF-A provides one implementation
1147 of such a mechanism.
1148
1149 On completion BL32 returns control to BL31 via a SMC, and on receipt the
1150 SPD service handler invokes the synchronous call return mechanism to return
1151 to the BL32 initialization function. On return from this function,
1152 ``bl31_main()`` will set up the return to the normal world firmware BL33 and
1153 continue the boot process in the normal world.
1154
1155 Crash Reporting in BL31
1156 -----------------------
1157
1158 BL31 implements a scheme for reporting the processor state when an unhandled
1159 exception is encountered. The reporting mechanism attempts to preserve all the
1160 register contents and report it via a dedicated UART (PL011 console). BL31
1161 reports the general purpose, EL3, Secure EL1 and some EL2 state registers.
1162
1163 A dedicated per-CPU crash stack is maintained by BL31 and this is retrieved via
1164 the per-CPU pointer cache. The implementation attempts to minimise the memory
1165 required for this feature. The file ``crash_reporting.S`` contains the
1166 implementation for crash reporting.
1167
A sample crash output is shown below.
1169
1170 ::
1171
1172 x0 :0x000000004F00007C
1173 x1 :0x0000000007FFFFFF
1174 x2 :0x0000000004014D50
1175 x3 :0x0000000000000000
1176 x4 :0x0000000088007998
1177 x5 :0x00000000001343AC
1178 x6 :0x0000000000000016
1179 x7 :0x00000000000B8A38
1180 x8 :0x00000000001343AC
1181 x9 :0x00000000000101A8
1182 x10 :0x0000000000000002
1183 x11 :0x000000000000011C
1184 x12 :0x00000000FEFDC644
1185 x13 :0x00000000FED93FFC
1186 x14 :0x0000000000247950
1187 x15 :0x00000000000007A2
1188 x16 :0x00000000000007A4
1189 x17 :0x0000000000247950
1190 x18 :0x0000000000000000
1191 x19 :0x00000000FFFFFFFF
1192 x20 :0x0000000004014D50
1193 x21 :0x000000000400A38C
1194 x22 :0x0000000000247950
1195 x23 :0x0000000000000010
1196 x24 :0x0000000000000024
1197 x25 :0x00000000FEFDC868
1198 x26 :0x00000000FEFDC86A
1199 x27 :0x00000000019EDEDC
1200 x28 :0x000000000A7CFDAA
1201 x29 :0x0000000004010780
1202 x30 :0x000000000400F004
1203 scr_el3 :0x0000000000000D3D
1204 sctlr_el3 :0x0000000000C8181F
1205 cptr_el3 :0x0000000000000000
1206 tcr_el3 :0x0000000080803520
1207 daif :0x00000000000003C0
1208 mair_el3 :0x00000000000004FF
1209 spsr_el3 :0x00000000800003CC
1210 elr_el3 :0x000000000400C0CC
1211 ttbr0_el3 :0x00000000040172A0
1212 esr_el3 :0x0000000096000210
1213 sp_el3 :0x0000000004014D50
1214 far_el3 :0x000000004F00007C
1215 spsr_el1 :0x0000000000000000
1216 elr_el1 :0x0000000000000000
1217 spsr_abt :0x0000000000000000
1218 spsr_und :0x0000000000000000
1219 spsr_irq :0x0000000000000000
1220 spsr_fiq :0x0000000000000000
1221 sctlr_el1 :0x0000000030C81807
1222 actlr_el1 :0x0000000000000000
1223 cpacr_el1 :0x0000000000300000
1224 csselr_el1 :0x0000000000000002
1225 sp_el1 :0x0000000004028800
1226 esr_el1 :0x0000000000000000
1227 ttbr0_el1 :0x000000000402C200
1228 ttbr1_el1 :0x0000000000000000
1229 mair_el1 :0x00000000000004FF
1230 amair_el1 :0x0000000000000000
1231 tcr_el1 :0x0000000000003520
1232 tpidr_el1 :0x0000000000000000
1233 tpidr_el0 :0x0000000000000000
1234 tpidrro_el0 :0x0000000000000000
1235 dacr32_el2 :0x0000000000000000
1236 ifsr32_el2 :0x0000000000000000
1237 par_el1 :0x0000000000000000
1238 far_el1 :0x0000000000000000
1239 afsr0_el1 :0x0000000000000000
1240 afsr1_el1 :0x0000000000000000
1241 contextidr_el1 :0x0000000000000000
1242 vbar_el1 :0x0000000004027000
1243 cntp_ctl_el0 :0x0000000000000000
1244 cntp_cval_el0 :0x0000000000000000
1245 cntv_ctl_el0 :0x0000000000000000
1246 cntv_cval_el0 :0x0000000000000000
1247 cntkctl_el1 :0x0000000000000000
1248 sp_el0 :0x0000000004010780
1249
1250 Guidelines for Reset Handlers
1251 -----------------------------
1252
1253 TF-A implements a framework that allows CPU and platform ports to perform
1254 actions very early after a CPU is released from reset in both the cold and warm
1255 boot paths. This is done by calling the ``reset_handler()`` function in both
1256 the BL1 and BL31 images. It in turn calls the platform and CPU specific reset
1257 handling functions.
1258
1259 Details for implementing a CPU specific reset handler can be found in
1260 Section 8. Details for implementing a platform specific reset handler can be
1261 found in the `Porting Guide`_ (see the ``plat_reset_handler()`` function).
1262
1263 When adding functionality to a reset handler, keep in mind that if a different
1264 reset handling behavior is required between the first and the subsequent
1265 invocations of the reset handling code, this should be detected at runtime.
In other words, the reset handler should be able to detect whether an action has
already been performed and act accordingly. Possible courses of action are, for
example, skipping the action the second time, or undoing and redoing it.
1269
1270 Configuring secure interrupts
1271 -----------------------------
1272
1273 The GIC driver is responsible for performing initial configuration of secure
1274 interrupts on the platform. To this end, the platform is expected to provide the
1275 GIC driver (either GICv2 or GICv3, as selected by the platform) with the
1276 interrupt configuration during the driver initialisation.
1277
The secure interrupt configuration is specified in an array of secure interrupt
1279 properties. In this scheme, in both GICv2 and GICv3 driver data structures, the
1280 ``interrupt_props`` member points to an array of interrupt properties. Each
1281 element of the array specifies the interrupt number and its attributes
1282 (priority, group, configuration). Each element of the array shall be populated
1283 by the macro ``INTR_PROP_DESC()``. The macro takes the following arguments:
1284
1285 - 10-bit interrupt number,
1286
1287 - 8-bit interrupt priority,
1288
1289 - Interrupt type (one of ``INTR_TYPE_EL3``, ``INTR_TYPE_S_EL1``,
1290 ``INTR_TYPE_NS``),
1291
1292 - Interrupt configuration (either ``GIC_INTR_CFG_LEVEL`` or
1293 ``GIC_INTR_CFG_EDGE``).
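
For illustration, a platform might populate such an array as in the sketch
below. The interrupt numbers and priorities are hypothetical, and the element
type is assumed to be the ``interrupt_prop_t`` type used by the GIC drivers:

.. code:: c

    static const interrupt_prop_t plat_interrupt_props[] = {
        /* Secure SGI 8, priority 0x10, routed to EL3, level-sensitive */
        INTR_PROP_DESC(8, 0x10, INTR_TYPE_EL3, GIC_INTR_CFG_LEVEL),
        /* Secure PPI 29, priority 0x20, handled in Secure-EL1, level-sensitive */
        INTR_PROP_DESC(29, 0x20, INTR_TYPE_S_EL1, GIC_INTR_CFG_LEVEL),
    };

The array, together with its number of elements, is then referenced from the
``interrupt_props`` member of the GICv2 or GICv3 driver data structure.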
1294
1295 CPU specific operations framework
1296 ---------------------------------
1297
1298 Certain aspects of the Armv8-A architecture are implementation defined,
1299 that is, certain behaviours are not architecturally defined, but must be
1300 defined and documented by individual processor implementations. TF-A
1301 implements a framework which categorises the common implementation defined
1302 behaviours and allows a processor to export its implementation of that
1303 behaviour. The categories are:
1304
1305 #. Processor specific reset sequence.
1306
1307 #. Processor specific power down sequences.
1308
1309 #. Processor specific register dumping as a part of crash reporting.
1310
1311 #. Errata status reporting.
1312
1313 Each of the above categories fulfils a different requirement.
1314
1315 #. allows any processor specific initialization before the caches and MMU
1316 are turned on, like implementation of errata workarounds, entry into
1317 the intra-cluster coherency domain etc.
1318
1319 #. allows each processor to implement the power down sequence mandated in
1320 its Technical Reference Manual (TRM).
1321
1322 #. allows a processor to provide additional information to the developer
1323 in the event of a crash, for example Cortex-A53 has registers which
1324 can expose the data cache contents.
1325
1326 #. allows a processor to define a function that inspects and reports the status
1327 of all errata workarounds on that processor.
1328
1329 Please note that only 2. is mandated by the TRM.
1330
1331 The CPU specific operations framework scales to accommodate a large number of
1332 different CPUs during power down and reset handling. The platform can specify
1333 any CPU optimization it wants to enable for each CPU. It can also specify
1334 the CPU errata workarounds to be applied for each CPU type during reset
1335 handling by defining CPU errata compile time macros. Details on these macros
1336 can be found in `CPU specific build macros`_.
1337
1338 The CPU specific operations framework depends on the ``cpu_ops`` structure which
1339 needs to be exported for each type of CPU in the platform. It is defined in
``include/lib/cpus/aarch64/cpu_macros.S`` and has the following fields: ``midr``,
1341 ``reset_func()``, ``cpu_pwr_down_ops`` (array of power down functions) and
1342 ``cpu_reg_dump()``.
1343
1344 The CPU specific files in ``lib/cpus`` export a ``cpu_ops`` data structure with
1345 suitable handlers for that CPU. For example, ``lib/cpus/aarch64/cortex_a53.S``
1346 exports the ``cpu_ops`` for Cortex-A53 CPU. According to the platform
1347 configuration, these CPU specific files must be included in the build by
1348 the platform makefile. The generic CPU specific operations framework code exists
1349 in ``lib/cpus/aarch64/cpu_helpers.S``.
1350
1351 CPU specific Reset Handling
1352 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
1353
After a reset, the state of the CPU when it calls the generic reset handler is:
MMU turned off, both instruction and data caches turned off, and the CPU not
part of any coherency domain.
1357
1358 The BL entrypoint code first invokes the ``plat_reset_handler()`` to allow
1359 the platform to perform any system initialization required and any system
errata workarounds that need to be applied. The ``get_cpu_ops_ptr()`` function reads
1361 the current CPU midr, finds the matching ``cpu_ops`` entry in the ``cpu_ops``
1362 array and returns it. Note that only the part number and implementer fields
1363 in midr are used to find the matching ``cpu_ops`` entry. The ``reset_func()`` in
the returned ``cpu_ops`` is then invoked, which executes the required reset
1365 handling for that CPU and also any errata workarounds enabled by the platform.
1366 This function must preserve the values of general purpose registers x20 to x29.
1367
1368 Refer to Section "Guidelines for Reset Handlers" for general guidelines
1369 regarding placement of code in a reset handler.
1370
1371 CPU specific power down sequence
1372 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1373
1374 During the BL31 initialization sequence, the pointer to the matching ``cpu_ops``
1375 entry is stored in per-CPU data by ``init_cpu_ops()`` so that it can be quickly
1376 retrieved during power down sequences.
1377
1378 Various CPU drivers register handlers to perform power down at certain power
1379 levels for that specific CPU. The PSCI service, upon receiving a power down
1380 request, determines the highest power level at which to execute power down
1381 sequence for a particular CPU. It uses the ``prepare_cpu_pwr_dwn()`` function to
1382 pick the right power down handler for the requested level. The function
retrieves the ``cpu_ops`` pointer member of per-CPU data, and from that, further
retrieves the ``cpu_pwr_down_ops`` array and indexes into the required level. If
the requested power level is higher than what a CPU driver supports, the handler
registered for the highest level is invoked.
1387
1388 At runtime the platform hooks for power down are invoked by the PSCI service to
1389 perform platform specific operations during a power down sequence, for example
1390 turning off CCI coherency during a cluster power down.
1391
1392 CPU specific register reporting during crash
1393 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1394
If crash reporting is enabled in BL31, when a crash occurs, the crash
reporting framework calls ``do_cpu_reg_dump``, which retrieves the matching
``cpu_ops`` using the ``get_cpu_ops_ptr()`` function. The ``cpu_reg_dump()`` in
1398 ``cpu_ops`` is invoked, which then returns the CPU specific register values to
1399 be reported and a pointer to the ASCII list of register names in a format
1400 expected by the crash reporting framework.
1401
1402 CPU errata status reporting
1403 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
1404
1405 Errata workarounds for CPUs supported in TF-A are applied during both cold and
warm boots, shortly after reset. Individual errata workarounds are enabled as
build options. Some errata workarounds have potential run-time implications;
therefore some are enabled by default while others are not. Platform ports shall
override the build options to enable or disable errata as appropriate. The CPU
1410 drivers take care of applying errata workarounds that are enabled and applicable
1411 to a given CPU. Refer to the section titled *CPU Errata Workarounds* in `CPUBM`_
1412 for more information.
1413
Functions in CPU drivers that apply errata workarounds must follow the
1415 conventions listed below.
1416
1417 The errata workaround must be authored as two separate functions:
1418
- One that checks for the errata. This function must determine whether the
  errata applies to the current CPU. Typically this involves matching the
  current CPU's revision and variant against a value that's known to be
  affected by the errata. If the function determines that the errata applies to
  this CPU, it must return ``ERRATA_APPLIES``; otherwise, it must return
  ``ERRATA_NOT_APPLIES``. The utility functions ``cpu_get_rev_var`` and
  ``cpu_rev_var_ls`` may come in handy for this purpose.
1426
1427 For an errata identified as ``E``, the check function must be named
1428 ``check_errata_E``.
1429
1430 This function will be invoked at different times, both from assembly and from
C run time. Therefore it must follow AAPCS, and must not use the stack.
1432
- Another one that applies the errata workaround. This function calls the
  check function described above and applies the errata workaround if required.
1435
CPU drivers that apply errata workarounds can optionally implement an assembly
function that reports the status of the errata workarounds pertaining to that
CPU. For a driver that registers the CPU, for example ``cpux``, via the
``declare_cpu_ops`` macro, the errata reporting function, if it exists, must be
named ``cpux_errata_report``. This function will always be called with the MMU
enabled; it must follow AAPCS and may use the stack.
1442
1443 In a debug build of TF-A, on a CPU that comes out of reset, both BL1 and the
runtime firmware (BL31 in AArch64, and BL32 in AArch32) will invoke the errata
status reporting function, if one exists, for that type of CPU.
1446
1447 To report the status of each errata workaround, the function shall use the
1448 assembler macro ``report_errata``, passing it:
1449
1450 - The build option that enables the errata;
1451
- The name of the CPU: this must be the same identifier that the CPU driver
1453 registered itself with, using ``declare_cpu_ops``;
1454
1455 - And the errata identifier: the identifier must match what's used in the
1456 errata's check function described above.
1457
1458 The errata status reporting function will be called once per CPU type/errata
combination during the software's active lifetime.
1460
1461 It's expected that whenever an errata workaround is submitted to TF-A, the
1462 errata reporting function is appropriately extended to report its status as
1463 well.
1464
Reporting the status of errata workarounds is for informational purposes only;
it has no functional significance.
1467
1468 Memory layout of BL images
1469 --------------------------
1470
Each bootloader image can be divided into 2 parts:
1472
1473 - the static contents of the image. These are data actually stored in the
1474 binary on the disk. In the ELF terminology, they are called ``PROGBITS``
1475 sections;
1476
1477 - the run-time contents of the image. These are data that don't occupy any
1478 space in the binary on the disk. The ELF binary just contains some
1479 metadata indicating where these data will be stored at run-time and the
1480 corresponding sections need to be allocated and initialized at run-time.
1481 In the ELF terminology, they are called ``NOBITS`` sections.
1482
1483 All PROGBITS sections are grouped together at the beginning of the image,
1484 followed by all NOBITS sections. This is true for all TF-A images and it is
1485 governed by the linker scripts. This ensures that the raw binary images are
as small as possible. If a NOBITS section were inserted in between PROGBITS
1487 sections then the resulting binary file would contain zero bytes in place of
1488 this NOBITS section, making the image unnecessarily bigger. Smaller images
1489 allow faster loading from the FIP to the main memory.
1490
1491 Linker scripts and symbols
1492 ~~~~~~~~~~~~~~~~~~~~~~~~~~
1493
1494 Each bootloader stage image layout is described by its own linker script. The
1495 linker scripts export some symbols into the program symbol table. Their values
1496 correspond to particular addresses. TF-A code can refer to these symbols to
1497 figure out the image memory layout.
1498
Linker symbols in TF-A follow the naming convention described below.
1500
1501 - ``__<SECTION>_START__``
1502
1503 Start address of a given section named ``<SECTION>``.
1504
1505 - ``__<SECTION>_END__``
1506
1507 End address of a given section named ``<SECTION>``. If there is an alignment
1508 constraint on the section's end address then ``__<SECTION>_END__`` corresponds
1509 to the end address of the section's actual contents, rounded up to the right
1510 boundary. Refer to the value of ``__<SECTION>_UNALIGNED_END__`` to know the
1511 actual end address of the section's contents.
1512
1513 - ``__<SECTION>_UNALIGNED_END__``
1514
1515 End address of a given section named ``<SECTION>`` without any padding or
1516 rounding up due to some alignment constraint.
1517
1518 - ``__<SECTION>_SIZE__``
1519
1520 Size (in bytes) of a given section named ``<SECTION>``. If there is an
1521 alignment constraint on the section's end address then ``__<SECTION>_SIZE__``
1522 corresponds to the size of the section's actual contents, rounded up to the
right boundary. In other words,
``__<SECTION>_SIZE__ = __<SECTION>_END__ - __<SECTION>_START__``. Refer to the
value of ``__<SECTION>_UNALIGNED_SIZE__`` to know the actual size of the
section's contents.
1525
1526 - ``__<SECTION>_UNALIGNED_SIZE__``
1527
1528 Size (in bytes) of a given section named ``<SECTION>`` without any padding or
1529 rounding up due to some alignment constraint. In other words,
1530 ``__<SECTION>_UNALIGNED_SIZE__ = __<SECTION>_UNALIGNED_END__ - __<SECTION>_START__``.
1531
1532 Some of the linker symbols are mandatory as TF-A code relies on them to be
1533 defined. They are listed in the following subsections. Some of them must be
1534 provided for each bootloader stage and some are specific to a given bootloader
1535 stage.
1536
1537 The linker scripts define some extra, optional symbols. They are not actually
1538 used by any code but they help in understanding the bootloader images' memory
1539 layout as they are easy to spot in the link map files.
1540
1541 Common linker symbols
1542 ^^^^^^^^^^^^^^^^^^^^^
1543
1544 All BL images share the following requirements:
1545
1546 - The BSS section must be zero-initialised before executing any C code.
1547 - The coherent memory section (if enabled) must be zero-initialised as well.
1548 - The MMU setup code needs to know the extents of the coherent and read-only
1549 memory regions to set the right memory attributes. When
1550 ``SEPARATE_CODE_AND_RODATA=1``, it needs to know more specifically how the
1551 read-only memory region is divided between code and data.
1552
1553 The following linker symbols are defined for this purpose:
1554
1555 - ``__BSS_START__``
1556 - ``__BSS_SIZE__``
1557 - ``__COHERENT_RAM_START__`` Must be aligned on a page-size boundary.
1558 - ``__COHERENT_RAM_END__`` Must be aligned on a page-size boundary.
1559 - ``__COHERENT_RAM_UNALIGNED_SIZE__``
1560 - ``__RO_START__``
1561 - ``__RO_END__``
1562 - ``__TEXT_START__``
1563 - ``__TEXT_END__``
1564 - ``__RODATA_START__``
1565 - ``__RODATA_END__``
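
For illustration, C code can import these linker-defined symbols as external
declarations and use their addresses. The sketch below shows the idea for the
BSS symbols; in TF-A the BSS is actually zero-initialised by early assembly
code, so this merely illustrates how the symbols are consumed:

.. code:: c

    #include <stddef.h>
    #include <stdint.h>
    #include <string.h>

    /*
     * The *addresses* of these symbols carry the values exported by the
     * linker script; the symbols have no meaningful contents of their own.
     */
    extern char __BSS_START__[];
    extern char __BSS_SIZE__[];

    static void zero_bss(void)
    {
        memset(__BSS_START__, 0, (size_t)(uintptr_t)__BSS_SIZE__);
    }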
1566
1567 BL1's linker symbols
1568 ^^^^^^^^^^^^^^^^^^^^
1569
Being the ROM image, BL1 has additional requirements. BL1 resides in ROM and is
entirely executed in place, but it needs some read-write memory for its mutable
data. Its ``.data`` section (i.e. its allocated read-write data) must be
relocated from ROM to RAM before executing any C code.
1574
1575 The following additional linker symbols are defined for BL1:
1576
1577 - ``__BL1_ROM_END__`` End address of BL1's ROM contents, covering its code
1578 and ``.data`` section in ROM.
1579 - ``__DATA_ROM_START__`` Start address of the ``.data`` section in ROM. Must be
1580 aligned on a 16-byte boundary.
1581 - ``__DATA_RAM_START__`` Address in RAM where the ``.data`` section should be
1582 copied over. Must be aligned on a 16-byte boundary.
1583 - ``__DATA_SIZE__`` Size of the ``.data`` section (in ROM or RAM).
1584 - ``__BL1_RAM_START__`` Start address of BL1 read-write data.
1585 - ``__BL1_RAM_END__`` End address of BL1 read-write data.
1586
1587 How to choose the right base addresses for each bootloader stage image
1588 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1589
1590 There is currently no support for dynamic image loading in TF-A. This means
1591 that all bootloader images need to be linked against their ultimate runtime
1592 locations and the base addresses of each image must be chosen carefully such
1593 that images don't overlap each other in an undesired way. As the code grows,
1594 the base addresses might need adjustments to cope with the new memory layout.
1595
1596 The memory layout is completely specific to the platform and so there is no
1597 general recipe for choosing the right base addresses for each bootloader image.
1598 However, there are tools to aid in understanding the memory layout. These are
1599 the link map files: ``build/<platform>/<build-type>/bl<x>/bl<x>.map``, with ``<x>``
being the bootloader stage. They provide a detailed view of the memory usage of
1601 each image. Among other useful information, they provide the end address of
1602 each image.
1603
1604 - ``bl1.map`` link map file provides ``__BL1_RAM_END__`` address.
1605 - ``bl2.map`` link map file provides ``__BL2_END__`` address.
1606 - ``bl31.map`` link map file provides ``__BL31_END__`` address.
1607 - ``bl32.map`` link map file provides ``__BL32_END__`` address.
1608
1609 For each bootloader image, the platform code must provide its start address
1610 as well as a limit address that it must not overstep. The latter is used in the
1611 linker scripts to check that the image doesn't grow past that address. If that
1612 happens, the linker will issue a message similar to the following:
1613
1614 ::
1615
1616 aarch64-none-elf-ld: BLx has exceeded its limit.
1617
1618 Additionally, if the platform memory layout implies some image overlaying like
1619 on FVP, BL31 and TSP need to know the limit address that their PROGBITS
1620 sections must not overstep. The platform code must provide those.
1621
TF-A does not provide any mechanism to verify at boot time that the memory used
to load a new image is free, in order to prevent overwriting a previously loaded
image.
1624 The platform must specify the memory available in the system for all the
1625 relevant BL images to be loaded.
1626
1627 For example, in the case of BL1 loading BL2, ``bl1_plat_sec_mem_layout()`` will
1628 return the region defined by the platform where BL1 intends to load BL2. The
``load_image()`` function performs a bounds check on the image size based on the
1630 base and maximum image size provided by the platforms. Platforms must take
1631 this behaviour into account when defining the base/size for each of the images.
1632
1633 Memory layout on Arm development platforms
1634 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
1635
1636 The following list describes the memory layout on the Arm development platforms:
1637
1638 - A 4KB page of shared memory is used for communication between Trusted
1639 Firmware and the platform's power controller. This is located at the base of
1640 Trusted SRAM. The amount of Trusted SRAM available to load the bootloader
1641 images is reduced by the size of the shared memory.
1642
1643 The shared memory is used to store the CPUs' entrypoint mailbox. On Juno,
1644 this is also used for the MHU payload when passing messages to and from the
1645 SCP.
1646
1647 - Another 4 KB page is reserved for passing memory layout between BL1 and BL2
1648 and also the dynamic firmware configurations.
1649
1650 - On FVP, BL1 is originally sitting in the Trusted ROM at address ``0x0``. On
1651 Juno, BL1 resides in flash memory at address ``0x0BEC0000``. BL1 read-write
1652 data are relocated to the top of Trusted SRAM at runtime.
1653
- BL2 is loaded below the BL1 R/W data.
1655
1656 - EL3 Runtime Software, BL31 for AArch64 and BL32 for AArch32 (e.g. SP_MIN),
1657 is loaded at the top of the Trusted SRAM, such that its NOBITS sections will
1658 overwrite BL1 R/W data and BL2. This implies that BL1 global variables
1659 remain valid only until execution reaches the EL3 Runtime Software entry
1660 point during a cold boot.
1661
1662 - On Juno, SCP_BL2 is loaded temporarily into the EL3 Runtime Software memory
region and transferred to the SCP before being overwritten by EL3 Runtime
1664 Software.
1665
1666 - BL32 (for AArch64) can be loaded in one of the following locations:
1667
1668 - Trusted SRAM
1669 - Trusted DRAM (FVP only)
1670 - Secure region of DRAM (top 16MB of DRAM configured by the TrustZone
1671 controller)
1672
1673 When BL32 (for AArch64) is loaded into Trusted SRAM, it is loaded below
1674 BL31.
1675
1676 The location of the BL32 image will result in different memory maps. This is
1677 illustrated for both FVP and Juno in the following diagrams, using the TSP as
1678 an example.
1679
1680 .. note::
1681 Loading the BL32 image in TZC secured DRAM doesn't change the memory
1682 layout of the other images in Trusted SRAM.
1683
The CONFIG section in the memory layouts shown below contains:
1685
1686 ::
1687
1688 +--------------------+
1689 |bl2_mem_params_descs|
1690 |--------------------|
1691 | fw_configs |
1692 +--------------------+
1693
``bl2_mem_params_descs`` contains parameters passed from BL2 to the next
BL image during boot.
1696
1697 ``fw_configs`` includes soc_fw_config, tos_fw_config and tb_fw_config.
1698
**FVP with TSP in Trusted SRAM with firmware configs:**
1700 (These diagrams only cover the AArch64 case)
1701
1702 ::
1703
1704 DRAM
1705 0xffffffff +----------+
1706 : :
1707 |----------|
1708 |HW_CONFIG |
1709 0x83000000 |----------| (non-secure)
1710 | |
1711 0x80000000 +----------+
1712
1713 Trusted SRAM
1714 0x04040000 +----------+ loaded by BL2 +----------------+
1715 | BL1 (rw) | <<<<<<<<<<<<< | |
1716 |----------| <<<<<<<<<<<<< | BL31 NOBITS |
1717 | BL2 | <<<<<<<<<<<<< | |
1718 |----------| <<<<<<<<<<<<< |----------------|
1719 | | <<<<<<<<<<<<< | BL31 PROGBITS |
1720 | | <<<<<<<<<<<<< |----------------|
1721 | | <<<<<<<<<<<<< | BL32 |
1722 0x04002000 +----------+ +----------------+
1723 | CONFIG |
1724 0x04001000 +----------+
1725 | Shared |
1726 0x04000000 +----------+
1727
1728 Trusted ROM
1729 0x04000000 +----------+
1730 | BL1 (ro) |
1731 0x00000000 +----------+
1732
1733 **FVP with TSP in Trusted DRAM with firmware configs (default option):**
1734
1735 ::
1736
1737 DRAM
1738 0xffffffff +--------------+
1739 : :
1740 |--------------|
1741 | HW_CONFIG |
1742 0x83000000 |--------------| (non-secure)
1743 | |
1744 0x80000000 +--------------+
1745
1746 Trusted DRAM
1747 0x08000000 +--------------+
1748 | BL32 |
1749 0x06000000 +--------------+
1750
1751 Trusted SRAM
1752 0x04040000 +--------------+ loaded by BL2 +----------------+
1753 | BL1 (rw) | <<<<<<<<<<<<< | |
1754 |--------------| <<<<<<<<<<<<< | BL31 NOBITS |
1755 | BL2 | <<<<<<<<<<<<< | |
1756 |--------------| <<<<<<<<<<<<< |----------------|
1757 | | <<<<<<<<<<<<< | BL31 PROGBITS |
1758 | | +----------------+
1759 +--------------+
1760 | CONFIG |
1761 0x04001000 +--------------+
1762 | Shared |
1763 0x04000000 +--------------+
1764
1765 Trusted ROM
1766 0x04000000 +--------------+
1767 | BL1 (ro) |
1768 0x00000000 +--------------+
1769
**FVP with TSP in TZC-Secured DRAM with firmware configs:**
1771
1772 ::
1773
1774 DRAM
1775 0xffffffff +----------+
1776 | BL32 | (secure)
1777 0xff000000 +----------+
1778 | |
1779 |----------|
1780 |HW_CONFIG |
1781 0x83000000 |----------| (non-secure)
1782 | |
1783 0x80000000 +----------+
1784
1785 Trusted SRAM
1786 0x04040000 +----------+ loaded by BL2 +----------------+
1787 | BL1 (rw) | <<<<<<<<<<<<< | |
1788 |----------| <<<<<<<<<<<<< | BL31 NOBITS |
1789 | BL2 | <<<<<<<<<<<<< | |
1790 |----------| <<<<<<<<<<<<< |----------------|
1791 | | <<<<<<<<<<<<< | BL31 PROGBITS |
1792 | | +----------------+
1793 0x04002000 +----------+
1794 | CONFIG |
1795 0x04001000 +----------+
1796 | Shared |
1797 0x04000000 +----------+
1798
1799 Trusted ROM
1800 0x04000000 +----------+
1801 | BL1 (ro) |
1802 0x00000000 +----------+
1803
**Juno with BL32 in Trusted SRAM:**
1805
1806 ::
1807
1808 Flash0
1809 0x0C000000 +----------+
1810 : :
1811 0x0BED0000 |----------|
1812 | BL1 (ro) |
1813 0x0BEC0000 |----------|
1814 : :
1815 0x08000000 +----------+ BL31 is loaded
1816 after SCP_BL2 has
1817 Trusted SRAM been sent to SCP
1818 0x04040000 +----------+ loaded by BL2 +----------------+
1819 | BL1 (rw) | <<<<<<<<<<<<< | |
1820 |----------| <<<<<<<<<<<<< | BL31 NOBITS |
1821 | BL2 | <<<<<<<<<<<<< | |
1822 |----------| <<<<<<<<<<<<< |----------------|
1823 | SCP_BL2 | <<<<<<<<<<<<< | BL31 PROGBITS |
1824 |----------| <<<<<<<<<<<<< |----------------|
1825 | | <<<<<<<<<<<<< | BL32 |
1826 | | +----------------+
1827 | |
1828 0x04001000 +----------+
1829 | MHU |
1830 0x04000000 +----------+
1831
**Juno with BL32 in TZC-secured DRAM:**
1833
1834 ::
1835
1836 DRAM
1837 0xFFE00000 +----------+
1838 | BL32 | (secure)
1839 0xFF000000 |----------|
1840 | |
1841 : : (non-secure)
1842 | |
1843 0x80000000 +----------+
1844
1845 Flash0
1846 0x0C000000 +----------+
1847 : :
1848 0x0BED0000 |----------|
1849 | BL1 (ro) |
1850 0x0BEC0000 |----------|
1851 : :
1852 0x08000000 +----------+ BL31 is loaded
1853 after SCP_BL2 has
1854 Trusted SRAM been sent to SCP
1855 0x04040000 +----------+ loaded by BL2 +----------------+
1856 | BL1 (rw) | <<<<<<<<<<<<< | |
1857 |----------| <<<<<<<<<<<<< | BL31 NOBITS |
1858 | BL2 | <<<<<<<<<<<<< | |
1859 |----------| <<<<<<<<<<<<< |----------------|
1860 | SCP_BL2 | <<<<<<<<<<<<< | BL31 PROGBITS |
1861 |----------| +----------------+
1862 0x04001000 +----------+
1863 | MHU |
1864 0x04000000 +----------+
1865
1866 Library at ROM
1867 ---------------
1868
1869 Please refer to the `ROMLIB Design`_ document.
1870
1871 Firmware Image Package (FIP)
1872 ----------------------------
1873
1874 Using a Firmware Image Package (FIP) allows for packing bootloader images (and
1875 potentially other payloads) into a single archive that can be loaded by TF-A
1876 from non-volatile platform storage. A driver to load images from a FIP has
1877 been added to the storage layer and allows a package to be read from supported
1878 platform storage. A tool to create Firmware Image Packages is also provided
1879 and described below.
1880
1881 Firmware Image Package layout
1882 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1883
1884 The FIP layout consists of a table of contents (ToC) followed by payload data.
The ToC itself has a header followed by one or more table entries. The ToC is
terminated by an end marker entry; since the size of this entry is 0 bytes, its
offset equals the total size of the FIP file. All ToC entries describe some
1888 payload data that has been appended to the end of the binary package. With the
1889 information provided in the ToC entry the corresponding payload data can be
1890 retrieved.
1891
1892 ::
1893
1894 ------------------
1895 | ToC Header |
1896 |----------------|
1897 | ToC Entry 0 |
1898 |----------------|
1899 | ToC Entry 1 |
1900 |----------------|
1901 | ToC End Marker |
1902 |----------------|
1903 | |
1904 | Data 0 |
1905 | |
1906 |----------------|
1907 | |
1908 | Data 1 |
1909 | |
1910 ------------------
1911
1912 The ToC header and entry formats are described in the header file
1913 ``include/tools_share/firmware_image_package.h``. This file is used by both the
1914 tool and TF-A.
1915
1916 The ToC header has the following fields:
1917
1918 ::
1919
1920 `name`: The name of the ToC. This is currently used to validate the header.
1921 `serial_number`: A non-zero number provided by the creation tool
1922 `flags`: Flags associated with this data.
1923 Bits 0-31: Reserved
1924 Bits 32-47: Platform defined
1925 Bits 48-63: Reserved
1926
1927 A ToC entry has the following fields:
1928
1929 ::
1930
1931 `uuid`: All files are referred to by a pre-defined Universally Unique
1932 IDentifier [UUID] . The UUIDs are defined in
1933 `include/tools_share/firmware_image_package.h`. The platform translates
1934 the requested image name into the corresponding UUID when accessing the
1935 package.
1936 `offset_address`: The offset address at which the corresponding payload data
1937 can be found. The offset is calculated from the ToC base address.
1938 `size`: The size of the corresponding payload data in bytes.
1939 `flags`: Flags associated with this entry. None are yet defined.
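
In C terms, the header and entry layouts correspond to structures along the
following lines. This is a sketch based on the field descriptions above; refer
to ``include/tools_share/firmware_image_package.h`` for the authoritative
definitions:

.. code:: c

    #include <stdint.h>
    #include <uuid.h>   /* Assumed to provide uuid_t (tools_share/uuid.h in TF-A) */

    typedef struct fip_toc_header {
        uint32_t    name;            /* Identifies the file as a FIP */
        uint32_t    serial_number;
        uint64_t    flags;
    } fip_toc_header_t;

    typedef struct fip_toc_entry {
        uuid_t      uuid;            /* Identifies the payload image */
        uint64_t    offset_address;  /* Offset from the start of the FIP */
        uint64_t    size;            /* Payload size in bytes */
        uint64_t    flags;
    } fip_toc_entry_t;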
1940
1941 Firmware Image Package creation tool
1942 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1943
1944 The FIP creation tool can be used to pack specified images into a binary
1945 package that can be loaded by TF-A from platform storage. The tool currently
1946 only supports packing bootloader images. Additional image definitions can be
1947 added to the tool as required.
1948
1949 The tool can be found in ``tools/fiptool``.
1950
1951 Loading from a Firmware Image Package (FIP)
1952 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1953
1954 The Firmware Image Package (FIP) driver can load images from a binary package on
1955 non-volatile platform storage. For the Arm development platforms, this is
1956 currently NOR FLASH.
1957
1958 Bootloader images are loaded according to the platform policy as specified by
1959 the function ``plat_get_image_source()``. For the Arm development platforms, this
1960 means the platform will attempt to load images from a Firmware Image Package
1961 located at the start of NOR FLASH0.
1962
1963 The Arm development platforms' policy is to only allow loading of a known set of
1964 images. The platform policy can be modified to allow additional images.
1965
1966 Use of coherent memory in TF-A
1967 ------------------------------
1968
1969 There might be loss of coherency when physical memory with mismatched
1970 shareability, cacheability and memory attributes is accessed by multiple CPUs
1971 (refer to section B2.9 of `Arm ARM`_ for more details). This possibility occurs
1972 in TF-A during power up/down sequences when coherency, MMU and caches are
1973 turned on/off incrementally.
1974
1975 TF-A defines coherent memory as a region of memory with Device nGnRE attributes
1976 in the translation tables. The translation granule size in TF-A is 4KB. This
1977 is the smallest possible size of the coherent memory region.
1978
1979 By default, all data structures which are susceptible to accesses with
1980 mismatched attributes from various CPUs are allocated in a coherent memory
1981 region (refer to section 2.1 of `Porting Guide`_). The coherent memory region
1982 accesses are Outer Shareable, non-cacheable and they can be accessed
1983 with the Device nGnRE attributes when the MMU is turned on. Hence, at the
1984 expense of at least an extra page of memory, TF-A is able to work around
1985 coherency issues due to mismatched memory attributes.
1986
1987 The alternative to the above approach is to allocate the susceptible data
1988 structures in Normal WriteBack WriteAllocate Inner shareable memory. This
1989 approach requires the data structures to be designed so that it is possible to
1990 work around the issue of mismatched memory attributes by performing software
1991 cache maintenance on them.
1992
1993 Disabling the use of coherent memory in TF-A
1994 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1995
1996 It might be desirable to avoid the cost of allocating coherent memory on
1997 platforms which are memory constrained. TF-A enables inclusion of coherent
1998 memory in firmware images through the build flag ``USE_COHERENT_MEM``.
1999 This flag is enabled by default. It can be disabled to choose the second
2000 approach described above.
2001
2002 The below sections analyze the data structures allocated in the coherent memory
2003 region and the changes required to allocate them in normal memory.
2004
2005 Coherent memory usage in PSCI implementation
2006 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2007
2008 The ``psci_non_cpu_pd_nodes`` data structure stores the platform's power domain
2009 tree information for state management of power domains. By default, this data
2010 structure is allocated in the coherent memory region in TF-A because it can be
2011 accessed by multiple CPUs, either with caches enabled or disabled.
2012
2013 .. code:: c
2014
2015 typedef struct non_cpu_pwr_domain_node {
2016 /*
2017 * Index of the first CPU power domain node level 0 which has this node
2018 * as its parent.
2019 */
2020 unsigned int cpu_start_idx;
2021
2022 /*
2023 * Number of CPU power domains which are siblings of the domain indexed
2024 * by 'cpu_start_idx' i.e. all the domains in the range 'cpu_start_idx
2025 * -> cpu_start_idx + ncpus' have this node as their parent.
2026 */
2027 unsigned int ncpus;
2028
2029 /*
2030 * Index of the parent power domain node.
2031 */
2032 unsigned int parent_node;
2033
2034 plat_local_state_t local_state;
2035
2036 unsigned char level;
2037
2038 /* For indexing the psci_lock array*/
2039 unsigned char lock_index;
2040 } non_cpu_pd_node_t;
2041
2042 In order to move this data structure to normal memory, the use of each of its
fields must be analyzed. Fields like ``cpu_start_idx``, ``ncpus``, ``parent_node``,
``level`` and ``lock_index`` are only written once during cold boot. Hence removing
2045 them from coherent memory involves only doing a clean and invalidate of the
2046 cache lines after these fields are written.
2047
2048 The field ``local_state`` can be concurrently accessed by multiple CPUs in
2049 different cache states. A Lamport's Bakery lock ``psci_locks`` is used to ensure
2050 mutual exclusion to this field and a clean and invalidate is needed after it
2051 is written.
2052
2053 Bakery lock data
2054 ~~~~~~~~~~~~~~~~
2055
2056 The bakery lock data structure ``bakery_lock_t`` is allocated in coherent memory
2057 and is accessed by multiple CPUs with mismatched attributes. ``bakery_lock_t`` is
2058 defined as follows:
2059
2060 .. code:: c
2061
2062 typedef struct bakery_lock {
2063 /*
2064 * The lock_data is a bit-field of 2 members:
2065 * Bit[0] : choosing. This field is set when the CPU is
2066 * choosing its bakery number.
2067 * Bits[1 - 15] : number. This is the bakery number allocated.
2068 */
2069 volatile uint16_t lock_data[BAKERY_LOCK_MAX_CPUS];
2070 } bakery_lock_t;
2071
2072 It is a characteristic of Lamport's Bakery algorithm that the volatile per-CPU
2073 fields can be read by all CPUs but only written to by the owning CPU.
2074
2075 Depending upon the data cache line size, the per-CPU fields of the
2076 ``bakery_lock_t`` structure for multiple CPUs may exist on a single cache line.
2077 These per-CPU fields can be read and written during lock contention by multiple
2078 CPUs with mismatched memory attributes. Since these fields are a part of the
2079 lock implementation, they do not have access to any other locking primitive to
2080 safeguard against the resulting coherency issues. As a result, simple software
2081 cache maintenance is not enough to allocate them in coherent memory. Consider
2082 the following example.
2083
2084 CPU0 updates its per-CPU field with data cache enabled. This write updates a
2085 local cache line which contains a copy of the fields for other CPUs as well. Now
2086 CPU1 updates its per-CPU field of the ``bakery_lock_t`` structure with data cache
2087 disabled. CPU1 then issues a DCIVAC operation to invalidate any stale copies of
2088 its field in any other cache line in the system. This operation will invalidate
2089 the update made by CPU0 as well.
2090
2091 To use bakery locks when ``USE_COHERENT_MEM`` is disabled, the lock data structure
2092 has been redesigned. The changes utilise the characteristic of Lamport's Bakery
2093 algorithm mentioned earlier. The bakery_lock structure only allocates the memory
2094 for a single CPU. The macro ``DEFINE_BAKERY_LOCK`` allocates all the bakery locks
2095 needed for a CPU into a section ``bakery_lock``. The linker allocates the memory
2096 for other cores by using the total size allocated for the bakery_lock section
2097 and multiplying it with (PLATFORM_CORE_COUNT - 1). This enables software to
2098 perform software cache maintenance on the lock data structure without running
2099 into coherency issues associated with mismatched attributes.
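
For illustration, a platform might define and use a bakery lock as in the
sketch below. The lock and function names are hypothetical, and the header
path may vary between TF-A versions:

.. code:: c

    #include <lib/bakery_lock.h>

    /*
     * Allocates this CPU's instance of the lock in the bakery_lock section
     * (or in coherent memory when USE_COHERENT_MEM=1).
     */
    DEFINE_BAKERY_LOCK(plat_shared_resource_lock);

    void plat_access_shared_resource(void)
    {
        bakery_lock_get(&plat_shared_resource_lock);
        /* ... critical section: access the shared resource ... */
        bakery_lock_release(&plat_shared_resource_lock);
    }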
2100
2101 The bakery lock data structure ``bakery_info_t`` is defined for use when
2102 ``USE_COHERENT_MEM`` is disabled as follows:
2103
2104 .. code:: c
2105
2106 typedef struct bakery_info {
2107 /*
2108 * The lock_data is a bit-field of 2 members:
2109 * Bit[0] : choosing. This field is set when the CPU is
2110 * choosing its bakery number.
2111 * Bits[1 - 15] : number. This is the bakery number allocated.
2112 */
2113 volatile uint16_t lock_data;
2114 } bakery_info_t;
2115
2116 The ``bakery_info_t`` represents a single per-CPU field of one lock and
2117 the combination of corresponding ``bakery_info_t`` structures for all CPUs in the
2118 system represents the complete bakery lock. The view in memory for a system
with n bakery locks is:
2120
2121 ::
2122
2123 bakery_lock section start
2124 |----------------|
2125 | `bakery_info_t`| <-- Lock_0 per-CPU field
2126 | Lock_0 | for CPU0
2127 |----------------|
2128 | `bakery_info_t`| <-- Lock_1 per-CPU field
2129 | Lock_1 | for CPU0
2130 |----------------|
2131 | .... |
2132 |----------------|
2133 | `bakery_info_t`| <-- Lock_N per-CPU field
2134 | Lock_N | for CPU0
2135 ------------------
2136 | XXXXX |
2137 | Padding to |
2138 | next Cache WB | <--- Calculate PERCPU_BAKERY_LOCK_SIZE, allocate
2139 | Granule | continuous memory for remaining CPUs.
2140 ------------------
2141 | `bakery_info_t`| <-- Lock_0 per-CPU field
2142 | Lock_0 | for CPU1
2143 |----------------|
2144 | `bakery_info_t`| <-- Lock_1 per-CPU field
2145 | Lock_1 | for CPU1
2146 |----------------|
2147 | .... |
2148 |----------------|
2149 | `bakery_info_t`| <-- Lock_N per-CPU field
2150 | Lock_N | for CPU1
2151 ------------------
2152 | XXXXX |
2153 | Padding to |
2154 | next Cache WB |
2155 | Granule |
2156 ------------------
2157
2158 Consider a system of 2 CPUs with 'N' bakery locks as shown above. For an
2159 operation on Lock_N, the corresponding ``bakery_info_t`` in both CPU0 and CPU1
``bakery_lock`` sections needs to be fetched and appropriate cache operations need
2161 to be performed for each access.
2162
On Arm platforms, bakery locks are used in the PSCI implementation
(``psci_locks``) and in the power controller driver (``arm_lock``).
2165
2166 Non Functional Impact of removing coherent memory
2167 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2168
2169 Removal of the coherent memory region leads to the additional software overhead
2170 of performing cache maintenance for the affected data structures. However, since
2171 the memory where the data structures are allocated is cacheable, the overhead is
2172 mostly mitigated by an increase in performance.
2173
2174 There is however a performance impact for bakery locks, due to:
2175
2176 - Additional cache maintenance operations, and
2177 - Multiple cache line reads for each lock operation, since the bakery locks
2178 for each CPU are distributed across different cache lines.
2179
2180 The implementation has been optimized to minimize this additional overhead.
2181 Measurements indicate that when bakery locks are allocated in Normal memory, the
minimum latency of acquiring a lock is on average 3-4 microseconds, whereas in
Device memory it is 2 microseconds. The measurements were done on the
2184 Juno Arm development platform.
2185
2186 As mentioned earlier, almost a page of memory can be saved by disabling
2187 ``USE_COHERENT_MEM``. Each platform needs to consider these trade-offs to decide
2188 whether coherent memory should be used. If a platform disables
2189 ``USE_COHERENT_MEM`` and needs to use bakery locks in the porting layer, it can
2190 optionally define macro ``PLAT_PERCPU_BAKERY_LOCK_SIZE`` (see the
2191 `Porting Guide`_). Refer to the reference platform code for examples.
2192
2193 Isolating code and read-only data on separate memory pages
2194 ----------------------------------------------------------
2195
2196 In the Armv8-A VMSA, translation table entries include fields that define the
2197 properties of the target memory region, such as its access permissions. The
2198 smallest unit of memory that can be addressed by a translation table entry is
2199 a memory page. Therefore, if software needs to set different permissions on two
2200 memory regions then it needs to map them using different memory pages.
2201
2202 The default memory layout for each BL image is as follows:
2203
2204 ::
2205
2206 | ... |
2207 +-------------------+
2208 | Read-write data |
2209 +-------------------+ Page boundary
2210 | <Padding> |
2211 +-------------------+
2212 | Exception vectors |
2213 +-------------------+ 2 KB boundary
2214 | <Padding> |
2215 +-------------------+
2216 | Read-only data |
2217 +-------------------+
2218 | Code |
2219 +-------------------+ BLx_BASE
2220
2221 .. note::
2222 The 2KB alignment for the exception vectors is an architectural
2223 requirement.
2224
2225 The read-write data start on a new memory page so that they can be mapped with
2226 read-write permissions, whereas the code and read-only data below are configured
2227 as read-only.
2228
2229 However, the read-only data are not aligned on a page boundary. They are
2230 contiguous to the code. Therefore, the end of the code section and the beginning
of the read-only data section might share a memory page. This forces both to be
2232 mapped with the same memory attributes. As the code needs to be executable, this
2233 means that the read-only data stored on the same memory page as the code are
2234 executable as well. This could potentially be exploited as part of a security
2235 attack.
2236
2237 TF provides the build flag ``SEPARATE_CODE_AND_RODATA`` to isolate the code and
2238 read-only data on separate memory pages. This in turn allows independent control
2239 of the access permissions for the code and read-only data. In this case,
2240 platform code gets a finer-grained view of the image layout and can
2241 appropriately map the code region as executable and the read-only data as
2242 execute-never.
2243
2244 This has an impact on memory footprint, as padding bytes need to be introduced
2245 between the code and read-only data to ensure the segregation of the two. To
2246 limit the memory cost, this flag also changes the memory layout such that the
2247 code and exception vectors are now contiguous, like so:
2248
2249 ::
2250
2251 | ... |
2252 +-------------------+
2253 | Read-write data |
2254 +-------------------+ Page boundary
2255 | <Padding> |
2256 +-------------------+
2257 | Read-only data |
2258 +-------------------+ Page boundary
2259 | <Padding> |
2260 +-------------------+
2261 | Exception vectors |
2262 +-------------------+ 2 KB boundary
2263 | <Padding> |
2264 +-------------------+
2265 | Code |
2266 +-------------------+ BLx_BASE
2267
2268 With this more condensed memory layout, the separation of read-only data will
2269 add zero or one page to the memory footprint of each BL image. Each platform
2270 should consider the trade-off between memory footprint and security.
2271
2272 This build flag is disabled by default, minimising memory footprint. On Arm
2273 platforms, it is enabled.
2274
2275 Publish and Subscribe Framework
2276 -------------------------------
2277
2278 The Publish and Subscribe Framework allows EL3 components to define and publish
2279 events, to which other EL3 components can subscribe.
2280
2281 The following macros are provided by the framework:
2282
2283 - ``REGISTER_PUBSUB_EVENT(event)``: Defines an event, and takes one argument,
2284 the event name, which must be a valid C identifier. All calls to
the ``REGISTER_PUBSUB_EVENT`` macro must be placed in the file
2286 ``pubsub_events.h``.
2287
2288 - ``PUBLISH_EVENT_ARG(event, arg)``: Publishes a defined event, by iterating
2289 subscribed handlers and calling them in turn. The handlers will be passed the
2290 parameter ``arg``. The expected use-case is to broadcast an event.
2291
2292 - ``PUBLISH_EVENT(event)``: Like ``PUBLISH_EVENT_ARG``, except that the value
2293 ``NULL`` is passed to subscribed handlers.
2294
2295 - ``SUBSCRIBE_TO_EVENT(event, handler)``: Registers the ``handler`` to
2296 subscribe to ``event``. The handler will be executed whenever the ``event``
2297 is published.
2298
2299 - ``for_each_subscriber(event, subscriber)``: Iterates through all handlers
2300 subscribed for ``event``. ``subscriber`` must be a local variable of type
2301 ``pubsub_cb_t *``, and will point to each subscribed handler in turn during
2302 iteration. This macro can be used for those patterns that none of the
2303 ``PUBLISH_EVENT_*()`` macros cover.
2304
Publishing an event that wasn't defined using ``REGISTER_PUBSUB_EVENT`` will
result in a build error. Subscribing to an undefined event, however, won't.
2307
Subscribed handlers must be of type ``pubsub_cb_t``, with the following function
2309 signature:
2310
2311 .. code:: c
2312
2313 typedef void* (*pubsub_cb_t)(const void *arg);
2314
There may be an arbitrary number of handlers registered to the same event. The
2316 order in which subscribed handlers are notified when that event is published is
2317 not defined. Subscribed handlers may be executed in any order; handlers should
2318 not assume any relative ordering amongst them.
2319
2320 Publishing an event on a PE will result in subscribed handlers executing on that
2321 PE only; it won't cause handlers to execute on a different PE.
2322
2323 Note that publishing an event on a PE blocks until all the subscribed handlers
2324 finish executing on the PE.
2325
TF-A generic code internally publishes and subscribes to some events. Platform
2327 ports are discouraged from subscribing to them. These events may be withdrawn,
2328 renamed, or have their semantics altered in the future. Platforms may however
2329 register, publish, and subscribe to platform-specific events.
2330
2331 Publish and Subscribe Example
2332 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2333
2334 A publisher that wants to publish event ``foo`` would:
2335
2336 - Define the event ``foo`` in the ``pubsub_events.h``.
2337
2338 .. code:: c
2339
2340 REGISTER_PUBSUB_EVENT(foo);
2341
- Depending on the nature of the event, use one of the ``PUBLISH_EVENT_*()``
  macros to publish the event at the appropriate path and time of execution,
  as sketched below.
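
For instance, to broadcast ``foo`` with no argument:

.. code:: c

    PUBLISH_EVENT(foo);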
2344
2345 A subscriber that wants to subscribe to event ``foo`` published above would
2346 implement:
2347
2348 .. code:: c
2349
2350 void *foo_handler(const void *arg)
2351 {
2352 void *result;
2353
2354 /* Do handling ... */
2355
2356 return result;
2357 }
2358
2359 SUBSCRIBE_TO_EVENT(foo, foo_handler);
2360
2361
2362 Reclaiming the BL31 initialization code
2363 ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
2364
2365 A significant amount of the code used for the initialization of BL31 is never
2366 needed again after boot time. In order to reduce the runtime memory
2367 footprint, the memory used for this code can be reclaimed after initialization
2368 has finished and be used for runtime data.
2369
2370 The build option ``RECLAIM_INIT_CODE`` can be set to mark this boot time code
2371 with a ``.text.init.*`` attribute which can be filtered and placed suitably
2372 within the BL image for later reclamation by the platform. The platform can
2373 specify the filter and the memory region for this init section in BL31 via the
2374 plat.ld.S linker script. For example, on the FVP, this section is placed
2375 overlapping the secondary CPU stacks so that after the cold boot is done, this
2376 memory can be reclaimed for the stacks. The init memory section is initially
2377 mapped with ``RO``, ``EXECUTE`` attributes. After BL31 initialization has
2378 completed, the FVP changes the attributes of this section to ``RW``,
2379 ``EXECUTE_NEVER`` allowing it to be used for runtime data. The memory attributes
2380 are changed within the ``bl31_plat_runtime_setup`` platform hook. The init
section can be reclaimed for any data which is accessed after cold boot
initialization, and it is up to the platform to make that decision.
2383
2384 Performance Measurement Framework
2385 ---------------------------------
2386
2387 The Performance Measurement Framework (PMF) facilitates collection of
2388 timestamps by registered services and provides interfaces to retrieve them
2389 from within TF-A. A platform can choose to expose appropriate SMCs to
2390 retrieve these collected timestamps.
2391
2392 By default, the global physical counter is used for the timestamp
value and is read via ``CNTPCT_EL0``. The framework allows retrieval of
timestamps captured by other CPUs.
2395
2396 Timestamp identifier format
2397 ~~~~~~~~~~~~~~~~~~~~~~~~~~~
2398
2399 A PMF timestamp is uniquely identified across the system via the
2400 timestamp ID or ``tid``. The ``tid`` is composed as follows:
2401
2402 ::
2403
2404 Bits 0-7: The local timestamp identifier.
2405 Bits 8-9: Reserved.
2406 Bits 10-15: The service identifier.
2407 Bits 16-31: Reserved.
2408
2409 #. The service identifier. Each PMF service is identified by a
2410 service name and a service identifier. Both the service name and
2411 identifier are unique within the system as a whole.
2412
2413 #. The local timestamp identifier. This identifier is unique within a given
2414 service.
2415
2416 Registering a PMF service
2417 ~~~~~~~~~~~~~~~~~~~~~~~~~
2418
2419 To register a PMF service, the ``PMF_REGISTER_SERVICE()`` macro from ``pmf.h``
2420 is used. The arguments required are the service name, the service ID,
2421 the total number of local timestamps to be captured and a set of flags.
2422
2423 The ``flags`` field can be specified as a bitwise-OR of the following values:
2424
2425 ::
2426
2427 PMF_STORE_ENABLE: The timestamp is stored in memory for later retrieval.
2428 PMF_DUMP_ENABLE: The timestamp is dumped on the serial console.
2429
The ``PMF_REGISTER_SERVICE()`` macro reserves memory to store captured
2431 timestamps in a PMF specific linker section at build time.
2432 Additionally, it defines necessary functions to capture and
2433 retrieve a particular timestamp for the given service at runtime.
2434
2435 The macro ``PMF_REGISTER_SERVICE()`` only enables capturing PMF timestamps
2436 from within TF-A. In order to retrieve timestamps from outside of TF-A, the
2437 ``PMF_REGISTER_SERVICE_SMC()`` macro must be used instead. This macro
2438 accepts the same set of arguments as the ``PMF_REGISTER_SERVICE()``
2439 macro but additionally supports retrieving timestamps using SMCs.
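
For illustration, a service could be registered as in the sketch below. The
service name, identifier and timestamp count are hypothetical, and the header
path may vary between TF-A versions:

.. code:: c

    #include <pmf.h>

    #define MYSVC_ID            0x10
    #define MYSVC_TOTAL_IDS     2U

    /*
     * Reserves storage for two timestamps and generates the capture and
     * retrieval helpers for the hypothetical 'mysvc' service.
     */
    PMF_REGISTER_SERVICE(mysvc, MYSVC_ID, MYSVC_TOTAL_IDS, PMF_STORE_ENABLE)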
2440
2441 Capturing a timestamp
2442 ~~~~~~~~~~~~~~~~~~~~~
2443
2444 PMF timestamps are stored in a per-service timestamp region. On a
2445 system with multiple CPUs, each timestamp is captured and stored
2446 in a per-CPU cache line aligned memory region.
2447
2448 Having registered the service, the ``PMF_CAPTURE_TIMESTAMP()`` macro can be
2449 used to capture a timestamp at the location where it is used. The macro
2450 takes the service name, a local timestamp identifier and a flag as arguments.
2451
2452 The ``flags`` field argument can be zero, or ``PMF_CACHE_MAINT`` which
2453 instructs PMF to do cache maintenance following the capture. Cache
2454 maintenance is required if any of the service's timestamps are captured
2455 with data cache disabled.
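
Continuing the hypothetical ``mysvc`` service registered earlier, and assuming
the capture sites live in the same source file as the registration, timestamps
could be captured as follows:

.. code:: c

    void mysvc_timed_operation(void)
    {
        /* Timestamp 0: captured on a path where caches are known to be on. */
        PMF_CAPTURE_TIMESTAMP(mysvc, 0, 0);

        /* ... the operation being measured ... */

        /*
         * Timestamp 1: captured on a path where the data cache may be off,
         * so ask PMF to perform cache maintenance after the capture.
         */
        PMF_CAPTURE_TIMESTAMP(mysvc, 1, PMF_CACHE_MAINT);
    }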
2456
To capture a timestamp in assembly code, the caller should use the
``pmf_calc_timestamp_addr`` macro (defined in ``pmf_asm_macros.S``) to
calculate the address where the timestamp will be stored. The
2460 caller should then read ``CNTPCT_EL0`` register to obtain the timestamp
2461 and store it at the determined address for later retrieval.
2462
2463 Retrieving a timestamp
2464 ~~~~~~~~~~~~~~~~~~~~~~
2465
2466 From within TF-A, timestamps for individual CPUs can be retrieved using either
2467 ``PMF_GET_TIMESTAMP_BY_MPIDR()`` or ``PMF_GET_TIMESTAMP_BY_INDEX()`` macros.
These macros accept the CPU's MPIDR value or its ordinal position,
respectively.

From outside TF-A, timestamps for individual CPUs can be retrieved by calling
into ``pmf_smc_handler()``.

::

    Interface : pmf_smc_handler()
    Argument  : unsigned int smc_fid, u_register_t x1,
                u_register_t x2, u_register_t x3,
                u_register_t x4, void *cookie,
                void *handle, u_register_t flags
    Return    : uintptr_t

    smc_fid: Holds the SMC identifier which is either `PMF_SMC_GET_TIMESTAMP_32`
        when the caller of the SMC is running in AArch32 mode
        or `PMF_SMC_GET_TIMESTAMP_64` when the caller is running in AArch64 mode.
    x1: Timestamp identifier.
    x2: The `mpidr` of the CPU for which the timestamp has to be retrieved.
        This can be the `mpidr` of a different core to the one initiating
        the SMC. In that case, service specific cache maintenance may be
        required to ensure the updated copy of the timestamp is returned.
    x3: A flags value that is either 0 or `PMF_CACHE_MAINT`. If
        `PMF_CACHE_MAINT` is passed, then the PMF code will perform a
        cache invalidate before reading the timestamp. This ensures
        an updated copy is returned.

The remaining arguments, ``x4``, ``cookie``, ``handle`` and ``flags``, are
unused in this implementation.
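
As an illustration only, a Non-secure AArch64 caller could issue this SMC as
sketched below. The wrapper function is hypothetical, the return convention
(return code in ``x0``, timestamp in ``x1``) is an assumption to be checked
against ``pmf_smc.c``, and the function ID and timestamp identifier values
must match those used by the firmware build:

.. code:: c

    #include <stdint.h>

    /* Hypothetical wrapper issuing an SMC per the SMC Calling Convention. */
    static uint64_t pmf_smc_get_timestamp(uint32_t fid, uint64_t tid,
                                          uint64_t mpidr, uint64_t flags)
    {
        register uint64_t x0 __asm__("x0") = fid;   /* PMF_SMC_GET_TIMESTAMP_64 */
        register uint64_t x1 __asm__("x1") = tid;   /* timestamp identifier */
        register uint64_t x2 __asm__("x2") = mpidr; /* target CPU's MPIDR */
        register uint64_t x3 __asm__("x3") = flags; /* 0 or PMF_CACHE_MAINT */

        __asm__ volatile("smc #0"
                         : "+r"(x0), "+r"(x1), "+r"(x2), "+r"(x3)
                         :
                         : "x4", "x5", "x6", "x7", "memory");

        /* Assumption: x0 carries the return code, x1 the timestamp value. */
        return x1;
    }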

PMF code structure
~~~~~~~~~~~~~~~~~~

#. ``pmf_main.c`` consists of core functions that implement service
   registration, initialization, storing, dumping and retrieving timestamps.

#. ``pmf_smc.c`` contains the SMC handling for registered PMF services.

#. ``pmf.h`` contains the public interface to the Performance Measurement
   Framework.

#. ``pmf_asm_macros.S`` consists of macros to facilitate capturing timestamps
   in assembly code.

#. ``pmf_helpers.h`` is an internal header used by ``pmf.h``.

Armv8-A Architecture Extensions
-------------------------------

TF-A makes use of Armv8-A Architecture Extensions where applicable. This
section lists the usage of Architecture Extensions, and the build flags
controlling them.

In general, and unless individually mentioned, the build options
``ARM_ARCH_MAJOR`` and ``ARM_ARCH_MINOR`` select the Architecture Extension to
target when building TF-A. Subsequent Arm Architecture Extensions are backward
compatible with previous versions.

The build system only requires that ``ARM_ARCH_MAJOR`` and ``ARM_ARCH_MINOR``
have a valid numeric value. These build options only control whether or not
Architecture Extension-specific code is included in the build. Otherwise, TF-A
targets the base Armv8.0-A architecture; i.e. as if ``ARM_ARCH_MAJOR`` == 8
and ``ARM_ARCH_MINOR`` == 0, which are also their respective default values.

See also the *Summary of build options* in the `User Guide`_.

For details on the Architecture Extensions and the features they provide,
please refer to the respective Architecture Extension Supplement.

Armv8.1-A
~~~~~~~~~

This Architecture Extension is targeted when ``ARM_ARCH_MAJOR`` > 8, or when
``ARM_ARCH_MAJOR`` == 8 and ``ARM_ARCH_MINOR`` >= 1.

- By default, a load-/store-exclusive instruction pair is used to implement
  spinlocks. Setting the ``USE_SPINLOCK_CAS`` build option to 1 selects a
  spinlock implementation based on the ARMv8.1-LSE Compare and Swap (CAS)
  instruction. Note that this instruction is only available in the AArch64
  execution state, so the option is only available to AArch64 builds (see the
  sketch after this list).
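
For illustration (this is a sketch, not the TF-A implementation), the
difference can be seen in a CAS-based lock acquisition written with compiler
atomics; when built with ``-march=armv8.1-a`` or later, the compare-and-swap
below can be emitted as a single CAS/CASA instruction instead of a
load-/store-exclusive loop:

.. code:: c

    #include <stdbool.h>
    #include <stdint.h>

    typedef struct spinlock {
        volatile uint32_t lock;
    } spinlock_t;

    static inline void spin_lock(spinlock_t *l)
    {
        uint32_t expected;

        do {
            /* Reset the expected value: a failed CAS overwrites it. */
            expected = 0U;
            /*
             * Atomically replace 0 with 1 with acquire semantics; retry
             * until the lock is observed free.
             */
        } while (!__atomic_compare_exchange_n(&l->lock, &expected, 1U, false,
                                              __ATOMIC_ACQUIRE,
                                              __ATOMIC_RELAXED));
    }

    static inline void spin_unlock(spinlock_t *l)
    {
        /* Release the lock with release semantics. */
        __atomic_store_n(&l->lock, 0U, __ATOMIC_RELEASE);
    }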

Armv8.2-A
~~~~~~~~~

- The presence of ARMv8.2-TTCNP is detected at runtime. When it is present, the
  Common not Private (TTBRn_ELx.CnP) bit is enabled to indicate that multiple
  Processing Elements in the same Inner Shareable domain use the same
  translation table entries for a given stage of translation for a particular
  translation regime.

Armv8.3-A
~~~~~~~~~

- Pointer authentication features of Armv8.3-A are unconditionally enabled in
  the Non-secure world so that lower ELs are allowed to use them without
  causing a trap to EL3; a sketch of this enablement follows this list.

  In order to enable the Secure world to use them, ``CTX_INCLUDE_PAUTH_REGS``
  must be set to 1. This will add all pointer authentication system registers
  to the context that is saved when doing a world switch.

  TF-A itself supports pointer authentication at runtime. This can be enabled
  by setting the ``BRANCH_PROTECTION`` option to a non-zero value and
  ``CTX_INCLUDE_PAUTH_REGS`` to 1, which enables pointer authentication in
  BL1, BL2, BL31, and the TSP if it is used.

  These options are experimental features.

  Note that Pointer Authentication is enabled for the Non-secure world
  irrespective of the value of these build flags if the CPU supports it.

  If ``ARM_ARCH_MAJOR == 8`` and ``ARM_ARCH_MINOR >= 3``, the code footprint of
  enabling PAuth is lower because the compiler will use the optimized
  PAuth instructions rather than the backwards-compatible ones.
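
The Non-secure enablement described above amounts to leaving the pointer
authentication instructions and key registers untrapped at EL3. A minimal
sketch, using the architectural SCR_EL3 bit positions (API is bit 17 and APK
is bit 16) and hypothetical system-register accessors, could look like this:

.. code:: c

    #include <stdint.h>

    /* Architectural SCR_EL3 bits defined by Armv8.3-A Pointer Authentication. */
    #define SCR_APK_BIT    (1ULL << 16)    /* do not trap key register accesses */
    #define SCR_API_BIT    (1ULL << 17)    /* do not trap PAuth instructions */

    /* Hypothetical accessors for this sketch. */
    static inline uint64_t read_scr_el3_sketch(void)
    {
        uint64_t v;

        __asm__ volatile("mrs %0, scr_el3" : "=r"(v));
        return v;
    }

    static inline void write_scr_el3_sketch(uint64_t v)
    {
        __asm__ volatile("msr scr_el3, %0" : : "r"(v));
        __asm__ volatile("isb");
    }

    /* Allow lower, Non-secure ELs to use PAuth without trapping to EL3. */
    static void enable_pauth_for_lower_els(void)
    {
        write_scr_el3_sketch(read_scr_el3_sketch() | SCR_API_BIT | SCR_APK_BIT);
    }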

Armv8.5-A
~~~~~~~~~

- The Branch Target Identification feature is selected by setting the
  ``BRANCH_PROTECTION`` option to 1. This option defaults to 0 and this is an
  experimental feature.

- The Memory Tagging Extension feature is unconditionally enabled for both
  worlds (at EL0 and S-EL0) if it is only supported at EL0. If instead it is
  implemented at all ELs, it is unconditionally enabled for only the normal
  world. To enable it for the secure world as well, the build option
  ``CTX_INCLUDE_MTE_REGS`` is required. If the hardware does not implement
  MTE support at all, it is always disabled, no matter what build options
  are used.

Armv7-A
~~~~~~~

This Architecture Extension is targeted when ``ARM_ARCH_MAJOR`` == 7.

There are several Armv7-A extensions available. The TrustZone extension is
mandatory to support the TF-A bootloader and runtime services.

A platform implementing an Armv7-A system can define its target Cortex-A
architecture through ``ARM_CORTEX_A<X> = yes`` in its ``platform.mk``
script, for example ``ARM_CORTEX_A15=yes`` for a Cortex-A15 target.

A platform can also set ``ARM_WITH_NEON=yes`` to enable NEON support. Note
that using NEON at runtime places constraints on the non-secure world
context; TF-A does not yet provide VFP context management.

The ``ARM_CORTEX_A<x>`` and ``ARM_WITH_NEON`` directives are used to set
the toolchain target architecture directive.

A platform may instead set the toolchain target architecture directive
directly by defining ``MARCH32_DIRECTIVE``, e.g.:

.. code:: make

    MARCH32_DIRECTIVE := -march=armv7-a

Code Structure
--------------

TF-A code is logically divided between the three boot loader stages mentioned
in the previous sections. The code is also divided into the following
categories (present as directories in the source code):

- **Platform specific.** The choice of architecture-specific code depends upon
  the platform.
- **Common code.** This is platform and architecture agnostic code.
- **Library code.** This code comprises functionality commonly used by all
  other code. The PSCI implementation and other EL3 runtime frameworks reside
  as Library components.
- **Stage specific.** Code specific to a boot stage.
- **Drivers.**
- **Services.** EL3 runtime services (e.g. SPD). Specific SPD services
  reside in the ``services/spd`` directory (e.g. ``services/spd/tspd``).

Each boot loader stage uses code from one or more of the above mentioned
categories. Based upon the above, the code layout looks like this:

::

    Directory    Used by BL1?    Used by BL2?    Used by BL31?
    bl1          Yes             No              No
    bl2          No              Yes             No
    bl31         No              No              Yes
    plat         Yes             Yes             Yes
    drivers      Yes             No              Yes
    common       Yes             Yes             Yes
    lib          Yes             Yes             Yes
    services     No              No              Yes

The build system provides a non-configurable build option ``IMAGE_BLx`` for
each boot loader stage (where x = BL stage). For example, for BL1,
``IMAGE_BL1`` will be defined by the build system. This enables TF-A to
compile certain code only for specific boot loader stages.
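
For example, a source file shared between stages can guard stage-specific code
with these definitions (the ``*_specific_setup()`` helpers below are
placeholders for this illustration):

.. code:: c

    void platform_setup_common(void)
    {
        /* Code shared by all boot loader stages goes here. */

    #if defined(IMAGE_BL1)
        /* Compiled only when this file is built into BL1. */
        bl1_specific_setup();
    #elif defined(IMAGE_BL31)
        /* Compiled only when this file is built into BL31. */
        bl31_specific_setup();
    #endif
    }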

All assembler files have the ``.S`` extension. The linker source files for each
boot stage have the extension ``.ld.S``. These are processed by GCC to create
the linker scripts which have the extension ``.ld``.

FDTs provide a description of the hardware platform and are used by the Linux
kernel at boot time. These can be found in the ``fdts`` directory.

References
----------

.. [#] `Trusted Board Boot Requirements CLIENT (TBBR-CLIENT) Armv8-A (ARM DEN0006D)`_
.. [#] `Power State Coordination Interface PDD`_
.. [#] `SMC Calling Convention PDD`_
.. [#] `TF-A Interrupt Management Design guide`_

--------------

*Copyright (c) 2013-2019, Arm Limited and Contributors. All rights reserved.*

.. _Reset Design: ./reset-design.rst
.. _Porting Guide: ../getting_started/porting-guide.rst
.. _Firmware Update: ../components/firmware-update.rst
.. _PSCI PDD: http://infocenter.arm.com/help/topic/com.arm.doc.den0022d/Power_State_Coordination_Interface_PDD_v1_1_DEN0022D.pdf
.. _SMC calling convention PDD: http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf
.. _PSCI Library integration guide: ../getting_started/psci-lib-integration-guide.rst
.. _SMCCC: http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf
.. _PSCI: http://infocenter.arm.com/help/topic/com.arm.doc.den0022d/Power_State_Coordination_Interface_PDD_v1_1_DEN0022D.pdf
.. _Power State Coordination Interface PDD: http://infocenter.arm.com/help/topic/com.arm.doc.den0022d/Power_State_Coordination_Interface_PDD_v1_1_DEN0022D.pdf
.. _here: ../getting_started/psci-lib-integration-guide.rst
.. _CPU specific build macros: ./cpu-specific-build-macros.rst
.. _CPUBM: ./cpu-specific-build-macros.rst
.. _Arm ARM: http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.ddi0487a.e/index.html
.. _User Guide: ../getting_started/user-guide.rst
.. _SMC Calling Convention PDD: http://infocenter.arm.com/help/topic/com.arm.doc.den0028b/ARM_DEN0028B_SMC_Calling_Convention.pdf
.. _TF-A Interrupt Management Design guide: ./interrupt-framework-design.rst
.. _Translation tables design: ../components/xlat-tables-lib-v2-design.rst
.. _Exception Handling Framework: ../components/exception-handling.rst
.. _ROMLIB Design: ../components/romlib-design.rst
.. _Trusted Board Boot Requirements CLIENT (TBBR-CLIENT) Armv8-A (ARM DEN0006D): https://developer.arm.com/docs/den0006/latest/trusted-board-boot-requirements-client-tbbr-client-armv8-a

.. |Image 1| image:: ../resources/diagrams/rt-svc-descs-layout.png