layerscape: add ls1088ardb device support
1 From 331b26080961f0289c3a8a8e5e65f6524b23be19 Mon Sep 17 00:00:00 2001
2 From: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
3 Date: Tue, 7 Apr 2015 23:24:55 -0400
4 Subject: [PATCH 198/226] staging: fsl-mc: dpio services driver
5 MIME-Version: 1.0
6 Content-Type: text/plain; charset=UTF-8
7 Content-Transfer-Encoding: 8bit
8
9 This is a squashed commit of the cumulative dpio services patches in
10 the SDK 2.0 kernel as of 3/7/2016.
11
12 staging: fsl-mc: dpio: initial implementation of dpio services
13
14 * Port from kernel 3.16 to 3.19
15 * upgrade to match MC fw 7.0.0
16 * return -EPROBE_DEFER if fsl_mc_portal_allocate() fails.
17 * enable DPIO interrupt support
18 * implement service FQDAN handling
19 * DPIO service selects DPIO objects using crude algorithms for now; we
20 will look to make this smarter later on.
21 * Locks all DPIO ops that aren't innately lockless. Smarter selection
22 logic may allow locking to be relaxed eventually.
23 * Portable QBMan driver source (and low-level MC flib code for DPIO) is
24 included and encapsulated within the DPIO driver.
25
26 Signed-off-by: Geoff Thorpe <Geoff.Thorpe@freescale.com>
27 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
28 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
29 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
30 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
31 Signed-off-by: Cristian Sovaiala <cristian.sovaiala@freescale.com>
32 Signed-off-by: J. German Rivera <German.Rivera@freescale.com>
33 Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
34 [Stuart: resolved merge conflicts]
35 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
36
37 dpio: Use locks when querying fq state
38
39 Merged from a patch in the 3.19-bringup branch.
40
41 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
42 Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
43 Change-Id: Ia4d09f8a0cf4d8a4a2aa1cb39be789c34425286d
44 Reviewed-on: http://git.am.freescale.net:8181/34707
45 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
46 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
47 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
48
49 qbman: Fix potential race in VDQCR handling
50
51 Remove the atomic_read() checks of the VDQCR busy marker. These checks were
52 racy, as the flag could be incorrectly cleared if checked while another
53 thread was starting a pull command. The checks are unneeded since we can
54 determine the owner of the outstanding pull command through other means.
55
56 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
57 Change-Id: Icc64577c0a4ce6dadef208975e980adfc6796c86
58 Reviewed-on: http://git.am.freescale.net:8181/34705
59 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
60 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
61 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
62 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
63
64 dpio: Fix IRQ handler and remove useless spinlock
65
66 The IRQ handler for a threaded IRQ requires two parts: initially the
67 handler should check status and inhibit the IRQ, then the threaded portion
68 should process and re-enable it.
69
70 Also remove a spinlock that was redundant with the QMan driver, and a debug
71 check that could trigger under a race condition.
72
73 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
74 Signed-off-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
75 Change-Id: I64926583af0be954228de94ae354fa005c8ec88a
76 Reviewed-on: http://git.am.freescale.net:8181/34706
77 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
78 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
79 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
80 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
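
A rough sketch of the two-part pattern described above (the
qbman_swp_interrupt_*() helpers are from the QBMan portal layer in this
patch; the handler bodies are illustrative, not the driver's exact code):

    static irqreturn_t dpio_hardirq(int irq, void *arg)
    {
            struct qbman_swp *swp = arg;    /* assumed handler data */

            /* top half: check status and inhibit further IRQs */
            if (!qbman_swp_interrupt_read_status(swp))
                    return IRQ_NONE;        /* not our interrupt */
            qbman_swp_interrupt_set_inhibit(swp, 1);
            return IRQ_WAKE_THREAD;
    }

    static irqreturn_t dpio_threadfn(int irq, void *arg)
    {
            struct qbman_swp *swp = arg;

            /* threaded half: process, then re-enable */
            /* ... consume DQRR entries / notifications here ... */
            qbman_swp_interrupt_set_inhibit(swp, 0);
            return IRQ_HANDLED;
    }

    /* registered with request_threaded_irq(irq, dpio_hardirq,
     * dpio_threadfn, IRQF_ONESHOT, name, swp) */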
81
82 staging: fsl-mc: dpio: Implement polling if IRQ not available
83
84 Temporarily add a polling mode to DPIO for the case where IRQ
85 registration fails.
86
87 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
88 Change-Id: Iebbd488fd14dd9878ef846e40f3ebcbcd0eb1e80
89 Reviewed-on: http://git.am.freescale.net:8181/34775
90 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
91 Reviewed-by: Jeffrey Ladouceur <Jeffrey.Ladouceur@freescale.com>
92 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
93
94 fsl-mc-dpio: Fix to make this work without interrupt
95
96 Some additional fixes to make the dpio driver work in poll mode.
97 This is needed for direct assignment to a KVM guest.
98
99 Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
100 Change-Id: Icf66b8c0c7f7e1610118f78396534c067f594934
101 Reviewed-on: http://git.am.freescale.net:8181/35333
102 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
103 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
104 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
105
106 fsl-mc-dpio: Make QBMan token tracking internal
107
108 Previously the QBMan portal code required the caller to properly set and
109 check a token value used by the driver to detect when the QMan
110 hardware had completed a dequeue. This patch simplifies the driver
111 interface by dealing with token values internally. The driver will now
112 set the token value to 0 once it has dequeued a frame, while a token
113 value of 1 indicates the HW has completed the dequeue but SW has not
114 consumed the frame yet.
115
116 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
117 Change-Id: If94d9728b0faa0fd79b47108f5cb05a425b89c18
118 Reviewed-on: http://git.am.freescale.net:8181/35433
119 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
120 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
121 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
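
As a sketch of the convention (the token field and constant names here are
illustrative, not the driver's exact ones):

    /* HW writes token 1 when a dequeue result lands; the driver rewrites
     * it to 0 as it hands the entry to the consumer. */
    #define QB_TOKEN_HW_DONE        1
    #define QB_TOKEN_SW_CONSUMED    0

    static int dq_entry_ready(struct dpaa2_dq *dq)
    {
            if (dq->tok != QB_TOKEN_HW_DONE)
                    return 0;               /* HW has not delivered yet */
            dq->tok = QB_TOKEN_SW_CONSUMED; /* claim it for software */
            return 1;
    }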
122
123 fsl-mc-dpio: Distribute DPIO IRQs among cores
124
125 Configure the DPIO IRQ affinities across all available cores
126
127 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
128 Change-Id: Ib45968a070460b7e9410bfe6067b20ecd3524c54
129 Reviewed-on: http://git.am.freescale.net:8181/35540
130 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
131 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
132 Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
133 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
134
135 dpio/qbman: add flush after finishing cena write
136
137 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
138 Change-Id: I19537f101f7f5b443d60c0ad0e5d96c1dc302223
139 Reviewed-on: http://git.am.freescale.net:8181/35854
140 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
141 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
142 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
143
144 dpio/qbman: rename qbman_dq_entry to qbman_result
145
146 Currently qbman_dq_entry is used both for dequeue results (in DQRR and
147 in memory) and for notifications (in DQRR and in memory). It doesn't
148 make sense to have dq_entry in the name of notifications that have
149 nothing to do with dequeues, so we rename it to qbman_result, which is
150 meaningful for both cases.
151
152 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
153 Change-Id: I62b3e729c571a1195e8802a9fab3fca97a14eae4
154 Reviewed-on: http://git.am.freescale.net:8181/35535
155 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
156 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
157 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
158
159 dpio/qbman: add APIs to parse BPSCN and CGCU
160
161 BPSCN and CGCU are notifications which can only be written to memory.
162 We need to consider the host endianness when parsing these notifications.
163 Also modify the check of FQRN/CSCN_MEM with the same consideration.
164
165 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
166 Change-Id: I572e0aa126107aed40e1ce326d5df7956882a939
167 Reviewed-on: http://git.am.freescale.net:8181/35536
168 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
169 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
170 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
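
A minimal sketch of what host-endianness-aware parsing means here (the
struct layout is assumed for illustration; QBMan writes the words
little-endian, so le32_to_cpu() is correct on both LE and BE hosts):

    struct qbman_bpscn {            /* layout assumed for illustration */
            __le32 state;           /* written little-endian by QBMan */
            /* ... remaining words elided ... */
    };

    static u32 bpscn_state(const struct qbman_bpscn *n)
    {
            return le32_to_cpu(n->state);
    }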
171
172 dpio/qbman: remove EXPORT_SYMBOL for qbman APIs
173
174 because they are only used by dpio.
175
176 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
177 Change-Id: I12e7b81c2d32f3c7b3df9fd73b742b1b675f4b8b
178 Reviewed-on: http://git.am.freescale.net:8181/35537
179 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
180 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
181 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
182
183 dpio/qbman: add invalidate and prefetch support
184
185 for cacheable memory access.
186 Also remove the redundant memory barriers.
187
188 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
189 Change-Id: I452a768278d1c5ef37e5741e9b011d725cb57b30
190 Reviewed-on: http://git.am.freescale.net:8181/35873
191 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
192 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
193 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
194
195 dpio-driver: Fix qman-portal interrupt masking in poll mode
196
197 The DPIO driver should mask qman-portal interrupt reporting when
198 working in poll mode. The has_irq flag is used for this, but interrupt
199 masking was happening before it was decided whether the system would
200 work in poll mode or interrupt mode.
201
202 This patch fixes the issue so that IRQ masking/enabling happens after
203 the irq/poll mode is decided.
204
205 Signed-off-by: Bharat Bhushan <Bharat.Bhushan@freescale.com>
206 Change-Id: I44de07b6142e80b3daea45e7d51a2d2799b2ed8d
207 Reviewed-on: http://git.am.freescale.net:8181/37100
208 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
209 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
210 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
211 (cherry picked from commit 3579244250dcb287a0fe58bcc3b3780076d040a2)
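
In outline, the corrected ordering is roughly the following (a paraphrase,
not the literal driver code; qbman_swp_interrupt_set_trigger() selects
which portal interrupt sources may fire):

    /* decide irq vs. poll mode first ... */
    desc.has_irq = (register_dpio_irq_handlers(ls_dev, cpu) == 0);

    /* ... and only then program the portal's interrupt trigger mask */
    if (desc.has_irq)
            qbman_swp_interrupt_set_trigger(swp, QBMAN_SWP_INTERRUPT_DQRI);
    else
            qbman_swp_interrupt_set_trigger(swp, 0); /* poll: mask all */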
212
213 dpio: Add a function to query buffer pool depth
214
215 Add a debug function that allows users to query the number
216 of buffers in a specific buffer pool.
217
218 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
219 Change-Id: Ie9a5f2e86d6a04ae61868bcc807121780c53cf6c
220 Reviewed-on: http://git.am.freescale.net:8181/36069
221 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
222 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
223 (cherry picked from commit 3c749d860592f62f6b219232580ca35fd1075337)
224
225 dpio: Use normal cachable non-shareable memory for qbman cena
226
227 QBMan SWP CENA portal memory requires the memory to be cacheable
228 and non-shareable.
229
230 Signed-off-by: Haiying Wang <Haiying.Wang@freescale.com>
231 Change-Id: I1c01cffe9ff2503fea2396d7cc761508f6e1ca85
232 Reviewed-on: http://git.am.freescale.net:8181/35487
233 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
234 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
235 (cherry picked from commit 2a7e1ede7e155d9219006999893912e0b029ce4c)
236
237 fsl-dpio: Process frames in IRQ context
238
239 Stop using threaded IRQs and move back to hardirq top-halves.
240 This is the first patch of a small series adapting the DPIO and Ethernet
241 code to these changes.
242
243 Signed-off-by: Roy Pledge <roy.pledge@freescale.com>
244 Tested-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
245 Tested-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
246 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
247 Reviewed-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
248 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
249 [Stuart: split out dpaa-eth part separately]
250 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
251
252 fsl-dpio: Fast DPIO object selection
253
254 The DPIO service code had a couple of problems with performance impact:
255 - The DPIO service object was protected by a global lock, within
256 functions called from the fast datapath on multiple CPUs.
257 - The DPIO service code would iterate unnecessarily through its linked
258 list, while most of the time it looks for CPU-bound objects.
259
260 Add a fast-access array pointing to the same dpaa_io objects as the DPIO
261 service's linked list, used in non-preemptible contexts.
262 Avoid list access/reordering if a specific CPU was requested. This
263 greatly limits contention on the global service lock.
264 Make explicit calls for per-CPU DPIO service objects if the current
265 context permits (which is the case on most of the Ethernet fastpath).
266
267 These changes incidentally fix a functional problem, too: according to
268 the specification of struct dpaa_io_notification_ctx, registration should
269 fail if the requested 'desired_cpu' cannot be honored. Instead,
270 dpaa_io_service_register() would keep searching for non-affine DPIO
271 objects, even when that was not requested.
272
273 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
274 Change-Id: I2dd78bc56179f97d3fd78052a653456e5f89ed82
275 Reviewed-on: http://git.am.freescale.net:8181/37689
276 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
277 Reviewed-by: Roy Pledge <roy.pledge@freescale.com>
278 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
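
The heart of the fast path is a per-CPU array lookup that bypasses the
service lock; objects_by_cpu[] and _service_select_by_cpu_slow() exist in
this patch, while the wrapper below is an illustrative sketch:

    static struct dpaa2_io *service_select_fast(struct dpaa2_io *d)
    {
            struct dpaa2_io *obj;
            int cpu = smp_processor_id();   /* caller is non-preemptible */

            obj = d->service.objects_by_cpu[cpu];
            if (obj)
                    return obj;     /* no lock taken, no list reordering */
            /* fall back to the locked, list-walking slow path */
            return _service_select_by_cpu_slow(&d->service, cpu);
    }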
279
280 DPIO: Implement a missing lock in DPIO
281
282 Implement missing DPIO service notification deregistration lock
283
284 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
285 Change-Id: Ida9a4d00cc3a66bc215c260a8df2b197366736f7
286 Reviewed-on: http://git.am.freescale.net:8181/38497
287 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
288 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
289 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
290
291 staging: fsl-mc: migrated dpio flibs for MC fw 8.0.0
292
293 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
294
295 fsl_qbman: Ensure SDQCR is only enabled if a channel is selected
296
297 QMan HW considers an SDQCR command that does not indicate any
298 channels to dequeue from to be an error. This change ensures that
299 a NULL command is set in the case where no channels are selected for dequeue.
300
301 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
302 Change-Id: I8861304881885db00df4a29d760848990d706c70
303 Reviewed-on: http://git.am.freescale.net:8181/38498
304 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
305 Reviewed-by: Haiying Wang <Haiying.Wang@freescale.com>
306 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
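
Sketched in C (SDQCR_CHANNELS_MASK is a hypothetical name for the
channel-selection bits; the register helper mirrors the portal code):

    /* Never leave SDQCR indicating zero channels: write the NULL command
     * (an all-zero SDQCR) instead, which disables static dequeue cleanly. */
    static void update_sdqcr(struct qbman_swp *p, u32 sdqcr)
    {
            if (!(sdqcr & SDQCR_CHANNELS_MASK))
                    sdqcr = 0;      /* NULL command */
            qbman_write_register(p, QBMAN_CINH_SWP_SDQCR, sdqcr);
    }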
307
308 flib: dpio: Fix compiler warning.
309
310 Gcc takes the credit here.
311 To be merged with other fixes on this branch.
312
313 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
314 Change-Id: If81f35ab3e8061aae1e03b72ab16a4c1dc390c3a
315 Reviewed-on: http://git.am.freescale.net:8181/39148
316 Tested-by: Review Code-CDREVIEW <CDREVIEW@freescale.com>
317 Reviewed-by: Stuart Yoder <stuart.yoder@freescale.com>
318
319 staging: fsl-mc: dpio: remove programing of MSIs in dpio driver
320
321 This is now handled in the bus driver.
322
323 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
324
325 fsl_qbman: Enable CDAN generation
326
327 Enable CDAN notification registration in both QBMan and DPIO.
328
329 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
330
331 fsl_dpio: Implement API to dequeue from a channel
332
333 Implement an API that allows users to dequeue from a channel
334
335 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
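
Typical usage might look like this (a sketch against the dpaa2_io service
API in this patch; exact signatures assumed):

    struct dpaa2_io_store *s = dpaa2_io_store_create(16, dev);
    struct dpaa2_dq *dq;
    int is_last = 0;

    /* pull up to 16 frames from channel 'ch_id' into the store */
    if (!dpaa2_io_service_pull_channel(NULL /* any service object */,
                                       ch_id, s)) {
            do {
                    dq = dpaa2_io_store_next(s, &is_last);
                    if (dq)
                            consume_frame(dq);  /* caller-defined */
            } while (!is_last);
    }
    dpaa2_io_store_destroy(s);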
336
337 fsl-dpio: Change dequeue command type
338
339 For now CDANs don't work with priority precedence.
340
341 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
342
343 fsl-dpio: Export FQD context getter function
344
345 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
346
347 fsl_dpio: Fix DPIO polling thread logic
348
349 Fix the DPIO polling thread logic and ensure the thread
350 is not parked.
351
352 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
353 [Stuart: fixed typo in comment]
354 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
355
356 fsl-dpio,qbman: Export functions
357
358 A few of the functions used by the Ethernet driver were not exported
359 yet. This is needed in order to compile the Eth driver as a module.
360
361 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
362 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
363
364 fsl_qbman: Use proper accessors when reading QBMan portals
365
366 Use accessors that properly byteswap when accessing QBMan portals
367
368 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
369
370 fsl_qbman: Fix encoding of 64 byte values
371
372 The QBMan driver encodes commands in 32-bit host endianness, then
373 converts to little endian before sending them to HW. This means 64-byte
374 values need to be encoded so that the values will be correctly swapped
375 when the commands are written to HW.
376
377 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
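
The idea can be sketched with a hypothetical helper (assuming the affected
values are 64-bit fields within a command): place the value as two 32-bit
host-endian words, so that the later per-word little-endian conversion of
the whole command produces the layout HW expects.

    static void encode_64(u32 *cmd, int word, u64 val)
    {
            /* split into host-endian 32-bit words; the subsequent
             * per-word cpu_to_le32() sweep swaps each half correctly */
            cmd[word]     = lower_32_bits(val);
            cmd[word + 1] = upper_32_bits(val);
    }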
378
379 dpaa_fd: Add functions for SG entries endianness conversions
380
381 Scatter-gather entries are little endian at the hardware level.
382 Add functions for converting the SG entry structure to CPU
383 endianness to avoid incorrect behaviour on BE kernels.
384
385 Signed-off-by: Ioana Radulescu <ruxandra.radulescu@freescale.com>
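
A representative accessor (a sketch with a simplified struct; the real
layout has more fields):

    struct dpaa2_sg_entry {         /* simplified for illustration */
            __le64 addr;            /* little-endian in memory */
            __le32 len;
            /* ... */
    };

    static inline dma_addr_t dpaa2_sg_get_addr(const struct dpaa2_sg_entry *sg)
    {
            return (dma_addr_t)le64_to_cpu(sg->addr);
    }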
386
387 fsl_dpaa: update header files with kernel-doc format
388
389 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
390
391 qbman: update header files to follow kernel-doc format
392
393 Plus rename orp_id to opr_id based on the BG.
394
395 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
396
397 fsl/dpio: rename ldpaa to dpaa2
398
399 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
400 (Stuart: split eth part out into a separate patch)
401 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
402
403 qbman_test: update qbman_test
404
405 - Update to sync with the latest changes in the qbman driver.
406 - Add bpscn test case
407
408 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
409
410 fsl-dpio: add FLE (Frame List Entry) for FMT=dpaa_fd_list support
411
412 Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
413
414 fsl-dpio: add accessors for FD[FRC]
415
416 Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
417
418 fsl-dpio: add accessors for FD[FLC]
419
420 Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
421 (Stuart: corrected typo in subject)
422 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
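
The FRC/FLC accessors plausibly take this shape (a sketch; the FD layout
is simplified and byteswapping follows the other FD fields):

    static inline u32 dpaa2_fd_get_frc(const struct dpaa2_fd *fd)
    {
            return le32_to_cpu(fd->simple.frc);     /* frame context */
    }

    static inline u64 dpaa2_fd_get_flc(const struct dpaa2_fd *fd)
    {
            return le64_to_cpu(fd->simple.flc);     /* flow context */
    }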
423
424 fsl/dpio: dpaa2_fd: Add the comments for newly added APIs.
425
426 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
427 [Stuart: added fsl/dpio prefix on commit subject]
428 Signed-off-by: Stuart Yoder <stuart.yoder@freescale.com>
429
430 fsl-dpio: rename dpaa_* structure to dpaa2_*
431
432 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
433 (Stuart: split eth and caam parts out into separate patches)
434 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
435
436 fsl-dpio: update the header file with more description in comments
437
438 plus fix some typos.
439
440 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
441 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
442
443 fsl-dpio: fix Klocwork issues.
444
445 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
446
447 fsl_dpio: Fix kernel doc issues and add an overview
448
449 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
450
451 fsl-dpio,qbman: Prefer affine portal to acquire/release buffers
452
453 The FQ enqueue/dequeue DPIO code attempts to select an affine QBMan
454 portal in order to minimize contention (under the assumption that most
455 of the calling code runs in affine contexts). Do the same now for
456 buffer acquire/release.
457
458 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
459
460 fsl-dpio: prefer affine QBMan portal in dpaa2_io_service_enqueue_fq
461
462 Commit 7b057d9bc3d31 ("fsl-dpio: Fast DPIO object selection")
463 took care of dpaa2_io_service_enqueue_qd but missed
464 dpaa2_io_service_enqueue_fq.
465
466 Cc: Bogdan Hamciuc <bogdan.hamciuc@freescale.com>
467 Signed-off-by: Horia Geantă <horia.geanta@freescale.com>
468
469 fsl/dpio: update the dpio flib files from mc9.0.0 release
470
471 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
472
473 fsl/dpio: pass qman_version from dpio attributes to swp desc
474
475 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
476
477 fsl/dpio/qbman: Use qman version to determine dqrr size
478
479 Signed-off-by: Haiying Wang <Haiying.wang@freescale.com>
480
481 fsl-dpio: Fix dequeue type enum values
482
483 enum qbman_pull_type_e did not follow the volatile dequeue command
484 specification, for which VERB=b'00 is a valid value (but of no
485 interest to us).
486
487 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
488 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
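
The fix amounts to keeping b'00 out of the enum's encodable values, for
example (values assumed from the volatile dequeue VERB field):

    enum qbman_pull_type_e {
            /* VERB=b'00 is reserved by the volatile dequeue command,
             * so usable dequeue types start at 1 */
            qbman_pull_type_prio = 1,       /* priority precedence */
            qbman_pull_type_active,         /* active FQ precedence */
            qbman_pull_type_active_noics    /* active, no ICS */
    };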
489
490 fsl-dpio: Volatile dequeue with priority precedence
491
492 Use priority precedence to do volatile dequeue from channels, rather
493 than active FQ precedence.
494
495 Signed-off-by: Bogdan Hamciuc <bogdan.hamciuc@nxp.com>
496 Signed-off-by: Roy Pledge <Roy.Pledge@freescale.com>
497
498 Signed-off-by: Stuart Yoder <stuart.yoder@nxp.com>
499 ---
500 drivers/staging/fsl-mc/bus/Kconfig | 16 +
501 drivers/staging/fsl-mc/bus/Makefile | 3 +
502 drivers/staging/fsl-mc/bus/dpio/Makefile | 9 +
503 drivers/staging/fsl-mc/bus/dpio/dpio-drv.c | 405 +++++++
504 drivers/staging/fsl-mc/bus/dpio/dpio-drv.h | 33 +
505 drivers/staging/fsl-mc/bus/dpio/dpio.c | 468 ++++++++
506 drivers/staging/fsl-mc/bus/dpio/dpio_service.c | 801 +++++++++++++
507 drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h | 460 ++++++++
508 drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h | 184 +++
509 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h | 123 ++
510 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h | 753 ++++++++++++
511 drivers/staging/fsl-mc/bus/dpio/qbman_debug.c | 846 ++++++++++++++
512 drivers/staging/fsl-mc/bus/dpio/qbman_debug.h | 136 +++
513 drivers/staging/fsl-mc/bus/dpio/qbman_portal.c | 1212 ++++++++++++++++++++
514 drivers/staging/fsl-mc/bus/dpio/qbman_portal.h | 261 +++++
515 drivers/staging/fsl-mc/bus/dpio/qbman_private.h | 173 +++
516 drivers/staging/fsl-mc/bus/dpio/qbman_sys.h | 307 +++++
517 drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h | 86 ++
518 drivers/staging/fsl-mc/bus/dpio/qbman_test.c | 664 +++++++++++
519 drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h | 774 +++++++++++++
520 drivers/staging/fsl-mc/include/fsl_dpaa2_io.h | 619 ++++++++++
521 21 files changed, 8333 insertions(+)
522 create mode 100644 drivers/staging/fsl-mc/bus/dpio/Makefile
523 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio-drv.c
524 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio-drv.h
525 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio.c
526 create mode 100644 drivers/staging/fsl-mc/bus/dpio/dpio_service.c
527 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h
528 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h
529 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h
530 create mode 100644 drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h
531 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_debug.c
532 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_debug.h
533 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_portal.c
534 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_portal.h
535 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_private.h
536 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_sys.h
537 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h
538 create mode 100644 drivers/staging/fsl-mc/bus/dpio/qbman_test.c
539 create mode 100644 drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h
540 create mode 100644 drivers/staging/fsl-mc/include/fsl_dpaa2_io.h
541
542 --- a/drivers/staging/fsl-mc/bus/Kconfig
543 +++ b/drivers/staging/fsl-mc/bus/Kconfig
544 @@ -28,3 +28,19 @@ config FSL_MC_RESTOOL
545 help
546 Driver that provides kernel support for the Freescale Management
547 Complex resource manager user-space tool.
548 +
549 +config FSL_MC_DPIO
550 + tristate "Freescale Data Path I/O (DPIO) driver"
551 + depends on FSL_MC_BUS
552 + help
553 + Driver for Freescale Data Path I/O (DPIO) devices.
554 + A DPIO device provides queue and buffer management facilities
555 + for software to interact with other Data Path devices. This
556 +	  driver does not expose DPIO devices individually, but
557 + groups them under a service layer API.
558 +
559 +config FSL_QBMAN_DEBUG
560 + tristate "Freescale QBMAN Debug APIs"
561 + depends on FSL_MC_DPIO
562 + help
563 + QBMan debug assistant APIs.
564 --- a/drivers/staging/fsl-mc/bus/Makefile
565 +++ b/drivers/staging/fsl-mc/bus/Makefile
566 @@ -21,3 +21,6 @@ mc-bus-driver-objs := mc-bus.o \
567
568 # MC restool kernel support
569 obj-$(CONFIG_FSL_MC_RESTOOL) += mc-restool.o
570 +
571 +# MC DPIO driver
572 +obj-$(CONFIG_FSL_MC_DPIO) += dpio/
573 --- /dev/null
574 +++ b/drivers/staging/fsl-mc/bus/dpio/Makefile
575 @@ -0,0 +1,9 @@
576 +#
577 +# Freescale DPIO driver
578 +#
579 +
580 +obj-$(CONFIG_FSL_MC_BUS) += fsl-dpio-drv.o
581 +
582 +fsl-dpio-drv-objs := dpio-drv.o dpio_service.o dpio.o qbman_portal.o
583 +
584 +obj-$(CONFIG_FSL_QBMAN_DEBUG) += qbman_debug.o
585 --- /dev/null
586 +++ b/drivers/staging/fsl-mc/bus/dpio/dpio-drv.c
587 @@ -0,0 +1,405 @@
588 +/* Copyright 2014 Freescale Semiconductor Inc.
589 + *
590 + * Redistribution and use in source and binary forms, with or without
591 + * modification, are permitted provided that the following conditions are met:
592 + * * Redistributions of source code must retain the above copyright
593 + * notice, this list of conditions and the following disclaimer.
594 + * * Redistributions in binary form must reproduce the above copyright
595 + * notice, this list of conditions and the following disclaimer in the
596 + * documentation and/or other materials provided with the distribution.
597 + * * Neither the name of Freescale Semiconductor nor the
598 + * names of its contributors may be used to endorse or promote products
599 + * derived from this software without specific prior written permission.
600 + *
601 + *
602 + * ALTERNATIVELY, this software may be distributed under the terms of the
603 + * GNU General Public License ("GPL") as published by the Free Software
604 + * Foundation, either version 2 of that License or (at your option) any
605 + * later version.
606 + *
607 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
608 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
609 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
610 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
611 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
612 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
613 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
614 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
615 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
616 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
617 + */
618 +
619 +#include <linux/types.h>
620 +#include <linux/init.h>
621 +#include <linux/module.h>
622 +#include <linux/platform_device.h>
623 +#include <linux/interrupt.h>
624 +#include <linux/msi.h>
625 +#include <linux/dma-mapping.h>
626 +#include <linux/kthread.h>
627 +#include <linux/delay.h>
628 +
629 +#include "../../include/mc.h"
630 +#include "../../include/fsl_dpaa2_io.h"
631 +
632 +#include "fsl_qbman_portal.h"
633 +#include "fsl_dpio.h"
634 +#include "fsl_dpio_cmd.h"
635 +
636 +#include "dpio-drv.h"
637 +
638 +#define DPIO_DESCRIPTION "DPIO Driver"
639 +
640 +MODULE_LICENSE("Dual BSD/GPL");
641 +MODULE_AUTHOR("Freescale Semiconductor, Inc");
642 +MODULE_DESCRIPTION(DPIO_DESCRIPTION);
643 +
644 +#define MAX_DPIO_IRQ_NAME 16 /* Big enough for "FSL DPIO %d" */
645 +
646 +struct dpio_priv {
647 + struct dpaa2_io *io;
648 + char irq_name[MAX_DPIO_IRQ_NAME];
649 + struct task_struct *thread;
650 +};
651 +
652 +static int dpio_thread(void *data)
653 +{
654 + struct dpaa2_io *io = data;
655 +
656 + while (!kthread_should_stop()) {
657 + int err = dpaa2_io_poll(io);
658 +
659 + if (err) {
660 + pr_err("dpaa2_io_poll() failed\n");
661 + return err;
662 + }
663 + msleep(50);
664 + }
665 + return 0;
666 +}
667 +
668 +static irqreturn_t dpio_irq_handler(int irq_num, void *arg)
669 +{
670 + struct device *dev = (struct device *)arg;
671 + struct dpio_priv *priv = dev_get_drvdata(dev);
672 +
673 + return dpaa2_io_irq(priv->io);
674 +}
675 +
676 +static void unregister_dpio_irq_handlers(struct fsl_mc_device *ls_dev)
677 +{
678 + int i;
679 + struct fsl_mc_device_irq *irq;
680 + int irq_count = ls_dev->obj_desc.irq_count;
681 +
682 + for (i = 0; i < irq_count; i++) {
683 + irq = ls_dev->irqs[i];
684 + devm_free_irq(&ls_dev->dev, irq->msi_desc->irq, &ls_dev->dev);
685 + }
686 +}
687 +
688 +static int register_dpio_irq_handlers(struct fsl_mc_device *ls_dev, int cpu)
689 +{
690 + struct dpio_priv *priv;
691 + unsigned int i;
692 + int error;
693 + struct fsl_mc_device_irq *irq;
694 + unsigned int num_irq_handlers_registered = 0;
695 + int irq_count = ls_dev->obj_desc.irq_count;
696 + cpumask_t mask;
697 +
698 + priv = dev_get_drvdata(&ls_dev->dev);
699 +
700 + if (WARN_ON(irq_count != 1))
701 + return -EINVAL;
702 +
703 + for (i = 0; i < irq_count; i++) {
704 + irq = ls_dev->irqs[i];
705 + error = devm_request_irq(&ls_dev->dev,
706 + irq->msi_desc->irq,
707 + dpio_irq_handler,
708 + 0,
709 + priv->irq_name,
710 + &ls_dev->dev);
711 + if (error < 0) {
712 + dev_err(&ls_dev->dev,
713 + "devm_request_irq() failed: %d\n",
714 + error);
715 + goto error_unregister_irq_handlers;
716 + }
717 +
718 + /* Set the IRQ affinity */
719 + cpumask_clear(&mask);
720 + cpumask_set_cpu(cpu, &mask);
721 + if (irq_set_affinity(irq->msi_desc->irq, &mask))
722 + pr_err("irq_set_affinity failed irq %d cpu %d\n",
723 + irq->msi_desc->irq, cpu);
724 +
725 + num_irq_handlers_registered++;
726 + }
727 +
728 + return 0;
729 +
730 +error_unregister_irq_handlers:
731 + for (i = 0; i < num_irq_handlers_registered; i++) {
732 + irq = ls_dev->irqs[i];
733 + devm_free_irq(&ls_dev->dev, irq->msi_desc->irq,
734 + &ls_dev->dev);
735 + }
736 +
737 + return error;
738 +}
739 +
740 +static int __cold
741 +dpaa2_dpio_probe(struct fsl_mc_device *ls_dev)
742 +{
743 + struct dpio_attr dpio_attrs;
744 + struct dpaa2_io_desc desc;
745 + struct dpio_priv *priv;
746 + int err = -ENOMEM;
747 + struct device *dev = &ls_dev->dev;
748 + struct dpaa2_io *defservice;
749 + bool irq_allocated = false;
750 + static int next_cpu;
751 +
752 + priv = devm_kzalloc(dev, sizeof(*priv), GFP_KERNEL);
753 + if (!priv)
754 + goto err_priv_alloc;
755 +
756 + dev_set_drvdata(dev, priv);
757 +
758 + err = fsl_mc_portal_allocate(ls_dev, 0, &ls_dev->mc_io);
759 + if (err) {
760 + dev_err(dev, "MC portal allocation failed\n");
761 + err = -EPROBE_DEFER;
762 + goto err_mcportal;
763 + }
764 +
765 + err = dpio_open(ls_dev->mc_io, 0, ls_dev->obj_desc.id,
766 + &ls_dev->mc_handle);
767 + if (err) {
768 + dev_err(dev, "dpio_open() failed\n");
769 + goto err_open;
770 + }
771 +
772 + err = dpio_get_attributes(ls_dev->mc_io, 0, ls_dev->mc_handle,
773 + &dpio_attrs);
774 + if (err) {
775 + dev_err(dev, "dpio_get_attributes() failed %d\n", err);
776 + goto err_get_attr;
777 + }
778 + err = dpio_enable(ls_dev->mc_io, 0, ls_dev->mc_handle);
779 + if (err) {
780 + dev_err(dev, "dpio_enable() failed %d\n", err);
781 + goto err_get_attr;
782 + }
783 + pr_info("ce_paddr=0x%llx, ci_paddr=0x%llx, portalid=%d, prios=%d\n",
784 + ls_dev->regions[0].start,
785 + ls_dev->regions[1].start,
786 + dpio_attrs.qbman_portal_id,
787 + dpio_attrs.num_priorities);
788 +
789 + pr_info("ce_size=0x%llx, ci_size=0x%llx\n",
790 + resource_size(&ls_dev->regions[0]),
791 + resource_size(&ls_dev->regions[1]));
792 +
793 + desc.qman_version = dpio_attrs.qbman_version;
794 + /* Build DPIO driver object out of raw MC object */
795 + desc.receives_notifications = dpio_attrs.num_priorities ? 1 : 0;
796 + desc.has_irq = 1;
797 + desc.will_poll = 1;
798 + desc.has_8prio = dpio_attrs.num_priorities == 8 ? 1 : 0;
799 + desc.cpu = next_cpu;
800 + desc.stash_affinity = 1; /* TODO: Figure out how to determine
801 + this setting - will we ever have non-affine
802 + portals where we stash to a platform cache? */
803 + next_cpu = (next_cpu + 1) % num_active_cpus();
804 + desc.dpio_id = ls_dev->obj_desc.id;
805 + desc.regs_cena = ioremap_cache_ns(ls_dev->regions[0].start,
806 + resource_size(&ls_dev->regions[0]));
807 + desc.regs_cinh = ioremap(ls_dev->regions[1].start,
808 + resource_size(&ls_dev->regions[1]));
809 +
810 + err = fsl_mc_allocate_irqs(ls_dev);
811 + if (err) {
812 + dev_err(dev, "DPIO fsl_mc_allocate_irqs failed\n");
813 + desc.has_irq = 0;
814 + } else {
815 + irq_allocated = true;
816 +
817 + snprintf(priv->irq_name, MAX_DPIO_IRQ_NAME, "FSL DPIO %d",
818 + desc.dpio_id);
819 +
820 + err = register_dpio_irq_handlers(ls_dev, desc.cpu);
821 + if (err)
822 + desc.has_irq = 0;
823 + }
824 +
825 + priv->io = dpaa2_io_create(&desc);
826 + if (!priv->io) {
827 + dev_err(dev, "DPIO setup failed\n");
828 + goto err_dpaa2_io_create;
829 + }
830 +
831 + /* If no irq then go to poll mode */
832 + if (desc.has_irq == 0) {
833 + dev_info(dev, "Using polling mode for DPIO %d\n",
834 + desc.dpio_id);
835 + /* goto err_register_dpio_irq; */
836 + /* TEMP: Start polling if IRQ could not
837 + be registered. This will go away once
838 + KVM support for MSI is present */
839 + if (irq_allocated == true)
840 + fsl_mc_free_irqs(ls_dev);
841 +
842 + if (desc.stash_affinity)
843 + priv->thread = kthread_create_on_cpu(dpio_thread,
844 + priv->io,
845 + desc.cpu,
846 + "dpio_aff%u");
847 + else
848 + priv->thread =
849 + kthread_create(dpio_thread,
850 + priv->io,
851 + "dpio_non%u",
852 + dpio_attrs.qbman_portal_id);
853 + if (IS_ERR(priv->thread)) {
854 + dev_err(dev, "DPIO thread failure\n");
855 + err = PTR_ERR(priv->thread);
856 + goto err_dpaa_thread;
857 + }
858 + kthread_unpark(priv->thread);
859 + wake_up_process(priv->thread);
860 + }
861 +
862 + defservice = dpaa2_io_default_service();
863 + err = dpaa2_io_service_add(defservice, priv->io);
864 + dpaa2_io_down(defservice);
865 + if (err) {
866 + dev_err(dev, "DPIO add-to-service failed\n");
867 + goto err_dpaa2_io_add;
868 + }
869 +
870 + dev_info(dev, "dpio: probed object %d\n", ls_dev->obj_desc.id);
871 + dev_info(dev, " receives_notifications = %d\n",
872 + desc.receives_notifications);
873 + dev_info(dev, " has_irq = %d\n", desc.has_irq);
874 + dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
875 + fsl_mc_portal_free(ls_dev->mc_io);
876 + return 0;
877 +
878 +err_dpaa2_io_add:
879 + unregister_dpio_irq_handlers(ls_dev);
880 +/* TEMP: To be restored once polling is removed
881 + err_register_dpio_irq:
882 + fsl_mc_free_irqs(ls_dev);
883 +*/
884 +err_dpaa_thread:
885 +err_dpaa2_io_create:
886 + dpio_disable(ls_dev->mc_io, 0, ls_dev->mc_handle);
887 +err_get_attr:
888 + dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
889 +err_open:
890 + fsl_mc_portal_free(ls_dev->mc_io);
891 +err_mcportal:
892 + dev_set_drvdata(dev, NULL);
893 + devm_kfree(dev, priv);
894 +err_priv_alloc:
895 + return err;
896 +}
897 +
898 +/*
899 + * Tear down interrupts for a given DPIO object
900 + */
901 +static void dpio_teardown_irqs(struct fsl_mc_device *ls_dev)
902 +{
903 + /* (void)disable_dpio_irqs(ls_dev); */
904 + unregister_dpio_irq_handlers(ls_dev);
905 + fsl_mc_free_irqs(ls_dev);
906 +}
907 +
908 +static int __cold
909 +dpaa2_dpio_remove(struct fsl_mc_device *ls_dev)
910 +{
911 + struct device *dev;
912 + struct dpio_priv *priv;
913 + int err;
914 +
915 + dev = &ls_dev->dev;
916 + priv = dev_get_drvdata(dev);
917 +
918 + /* there is no implementation yet for pulling a DPIO object out of a
919 + * running service (and they're currently always running).
920 + */
921 + dev_crit(dev, "DPIO unplugging is broken, the service holds onto it\n");
922 +
923 + if (priv->thread)
924 + kthread_stop(priv->thread);
925 + else
926 + dpio_teardown_irqs(ls_dev);
927 +
928 + err = fsl_mc_portal_allocate(ls_dev, 0, &ls_dev->mc_io);
929 + if (err) {
930 + dev_err(dev, "MC portal allocation failed\n");
931 + goto err_mcportal;
932 + }
933 +
934 + err = dpio_open(ls_dev->mc_io, 0, ls_dev->obj_desc.id,
935 + &ls_dev->mc_handle);
936 + if (err) {
937 + dev_err(dev, "dpio_open() failed\n");
938 + goto err_open;
939 + }
940 +
941 + dev_set_drvdata(dev, NULL);
942 + dpaa2_io_down(priv->io);
943 +
944 + err = 0;
945 +
946 + dpio_disable(ls_dev->mc_io, 0, ls_dev->mc_handle);
947 + dpio_close(ls_dev->mc_io, 0, ls_dev->mc_handle);
948 +err_open:
949 + fsl_mc_portal_free(ls_dev->mc_io);
950 +err_mcportal:
951 + return err;
952 +}
953 +
954 +static const struct fsl_mc_device_match_id dpaa2_dpio_match_id_table[] = {
955 + {
956 + .vendor = FSL_MC_VENDOR_FREESCALE,
957 + .obj_type = "dpio",
958 + .ver_major = DPIO_VER_MAJOR,
959 + .ver_minor = DPIO_VER_MINOR
960 + },
961 + { .vendor = 0x0 }
962 +};
963 +
964 +static struct fsl_mc_driver dpaa2_dpio_driver = {
965 + .driver = {
966 + .name = KBUILD_MODNAME,
967 + .owner = THIS_MODULE,
968 + },
969 + .probe = dpaa2_dpio_probe,
970 + .remove = dpaa2_dpio_remove,
971 + .match_id_table = dpaa2_dpio_match_id_table
972 +};
973 +
974 +static int dpio_driver_init(void)
975 +{
976 + int err;
977 +
978 + err = dpaa2_io_service_driver_init();
979 + if (!err) {
980 + err = fsl_mc_driver_register(&dpaa2_dpio_driver);
981 + if (err)
982 + dpaa2_io_service_driver_exit();
983 + }
984 + return err;
985 +}
986 +static void dpio_driver_exit(void)
987 +{
988 + fsl_mc_driver_unregister(&dpaa2_dpio_driver);
989 + dpaa2_io_service_driver_exit();
990 +}
991 +module_init(dpio_driver_init);
992 +module_exit(dpio_driver_exit);
993 --- /dev/null
994 +++ b/drivers/staging/fsl-mc/bus/dpio/dpio-drv.h
995 @@ -0,0 +1,33 @@
996 +/* Copyright 2014 Freescale Semiconductor Inc.
997 + *
998 + * Redistribution and use in source and binary forms, with or without
999 + * modification, are permitted provided that the following conditions are met:
1000 + * * Redistributions of source code must retain the above copyright
1001 + * notice, this list of conditions and the following disclaimer.
1002 + * * Redistributions in binary form must reproduce the above copyright
1003 + * notice, this list of conditions and the following disclaimer in the
1004 + * documentation and/or other materials provided with the distribution.
1005 + * * Neither the name of Freescale Semiconductor nor the
1006 + * names of its contributors may be used to endorse or promote products
1007 + * derived from this software without specific prior written permission.
1008 + *
1009 + *
1010 + * ALTERNATIVELY, this software may be distributed under the terms of the
1011 + * GNU General Public License ("GPL") as published by the Free Software
1012 + * Foundation, either version 2 of that License or (at your option) any
1013 + * later version.
1014 + *
1015 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1016 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1017 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1018 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1019 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1020 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1021 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1022 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1023 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1024 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1025 + */
1026 +
1027 +int dpaa2_io_service_driver_init(void);
1028 +void dpaa2_io_service_driver_exit(void);
1029 --- /dev/null
1030 +++ b/drivers/staging/fsl-mc/bus/dpio/dpio.c
1031 @@ -0,0 +1,468 @@
1032 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
1033 + *
1034 + * Redistribution and use in source and binary forms, with or without
1035 + * modification, are permitted provided that the following conditions are met:
1036 + * * Redistributions of source code must retain the above copyright
1037 + * notice, this list of conditions and the following disclaimer.
1038 + * * Redistributions in binary form must reproduce the above copyright
1039 + * notice, this list of conditions and the following disclaimer in the
1040 + * documentation and/or other materials provided with the distribution.
1041 + * * Neither the name of the above-listed copyright holders nor the
1042 + * names of any contributors may be used to endorse or promote products
1043 + * derived from this software without specific prior written permission.
1044 + *
1045 + *
1046 + * ALTERNATIVELY, this software may be distributed under the terms of the
1047 + * GNU General Public License ("GPL") as published by the Free Software
1048 + * Foundation, either version 2 of that License or (at your option) any
1049 + * later version.
1050 + *
1051 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
1052 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
1053 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
1054 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
1055 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
1056 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
1057 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
1058 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
1059 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
1060 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
1061 + * POSSIBILITY OF SUCH DAMAGE.
1062 + */
1063 +#include "../../include/mc-sys.h"
1064 +#include "../../include/mc-cmd.h"
1065 +#include "fsl_dpio.h"
1066 +#include "fsl_dpio_cmd.h"
1067 +
1068 +int dpio_open(struct fsl_mc_io *mc_io,
1069 + uint32_t cmd_flags,
1070 + int dpio_id,
1071 + uint16_t *token)
1072 +{
1073 + struct mc_command cmd = { 0 };
1074 + int err;
1075 +
1076 + /* prepare command */
1077 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_OPEN,
1078 + cmd_flags,
1079 + 0);
1080 + DPIO_CMD_OPEN(cmd, dpio_id);
1081 +
1082 + /* send command to mc*/
1083 + err = mc_send_command(mc_io, &cmd);
1084 + if (err)
1085 + return err;
1086 +
1087 + /* retrieve response parameters */
1088 + *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
1089 +
1090 + return 0;
1091 +}
1092 +
1093 +int dpio_close(struct fsl_mc_io *mc_io,
1094 + uint32_t cmd_flags,
1095 + uint16_t token)
1096 +{
1097 + struct mc_command cmd = { 0 };
1098 +
1099 + /* prepare command */
1100 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_CLOSE,
1101 + cmd_flags,
1102 + token);
1103 +
1104 + /* send command to mc*/
1105 + return mc_send_command(mc_io, &cmd);
1106 +}
1107 +
1108 +int dpio_create(struct fsl_mc_io *mc_io,
1109 + uint32_t cmd_flags,
1110 + const struct dpio_cfg *cfg,
1111 + uint16_t *token)
1112 +{
1113 + struct mc_command cmd = { 0 };
1114 + int err;
1115 +
1116 + /* prepare command */
1117 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_CREATE,
1118 + cmd_flags,
1119 + 0);
1120 + DPIO_CMD_CREATE(cmd, cfg);
1121 +
1122 + /* send command to mc*/
1123 + err = mc_send_command(mc_io, &cmd);
1124 + if (err)
1125 + return err;
1126 +
1127 + /* retrieve response parameters */
1128 + *token = MC_CMD_HDR_READ_TOKEN(cmd.header);
1129 +
1130 + return 0;
1131 +}
1132 +
1133 +int dpio_destroy(struct fsl_mc_io *mc_io,
1134 + uint32_t cmd_flags,
1135 + uint16_t token)
1136 +{
1137 + struct mc_command cmd = { 0 };
1138 +
1139 + /* prepare command */
1140 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_DESTROY,
1141 + cmd_flags,
1142 + token);
1143 +
1144 + /* send command to mc*/
1145 + return mc_send_command(mc_io, &cmd);
1146 +}
1147 +
1148 +int dpio_enable(struct fsl_mc_io *mc_io,
1149 + uint32_t cmd_flags,
1150 + uint16_t token)
1151 +{
1152 + struct mc_command cmd = { 0 };
1153 +
1154 + /* prepare command */
1155 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_ENABLE,
1156 + cmd_flags,
1157 + token);
1158 +
1159 + /* send command to mc*/
1160 + return mc_send_command(mc_io, &cmd);
1161 +}
1162 +
1163 +int dpio_disable(struct fsl_mc_io *mc_io,
1164 + uint32_t cmd_flags,
1165 + uint16_t token)
1166 +{
1167 + struct mc_command cmd = { 0 };
1168 +
1169 + /* prepare command */
1170 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_DISABLE,
1171 + cmd_flags,
1172 + token);
1173 +
1174 + /* send command to mc*/
1175 + return mc_send_command(mc_io, &cmd);
1176 +}
1177 +
1178 +int dpio_is_enabled(struct fsl_mc_io *mc_io,
1179 + uint32_t cmd_flags,
1180 + uint16_t token,
1181 + int *en)
1182 +{
1183 + struct mc_command cmd = { 0 };
1184 + int err;
1185 + /* prepare command */
1186 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_IS_ENABLED, cmd_flags,
1187 + token);
1188 +
1189 + /* send command to mc*/
1190 + err = mc_send_command(mc_io, &cmd);
1191 + if (err)
1192 + return err;
1193 +
1194 + /* retrieve response parameters */
1195 + DPIO_RSP_IS_ENABLED(cmd, *en);
1196 +
1197 + return 0;
1198 +}
1199 +
1200 +int dpio_reset(struct fsl_mc_io *mc_io,
1201 + uint32_t cmd_flags,
1202 + uint16_t token)
1203 +{
1204 + struct mc_command cmd = { 0 };
1205 +
1206 + /* prepare command */
1207 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_RESET,
1208 + cmd_flags,
1209 + token);
1210 +
1211 + /* send command to mc*/
1212 + return mc_send_command(mc_io, &cmd);
1213 +}
1214 +
1215 +int dpio_set_irq(struct fsl_mc_io *mc_io,
1216 + uint32_t cmd_flags,
1217 + uint16_t token,
1218 + uint8_t irq_index,
1219 + struct dpio_irq_cfg *irq_cfg)
1220 +{
1221 + struct mc_command cmd = { 0 };
1222 +
1223 + /* prepare command */
1224 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ,
1225 + cmd_flags,
1226 + token);
1227 + DPIO_CMD_SET_IRQ(cmd, irq_index, irq_cfg);
1228 +
1229 + /* send command to mc*/
1230 + return mc_send_command(mc_io, &cmd);
1231 +}
1232 +
1233 +int dpio_get_irq(struct fsl_mc_io *mc_io,
1234 + uint32_t cmd_flags,
1235 + uint16_t token,
1236 + uint8_t irq_index,
1237 + int *type,
1238 + struct dpio_irq_cfg *irq_cfg)
1239 +{
1240 + struct mc_command cmd = { 0 };
1241 + int err;
1242 +
1243 + /* prepare command */
1244 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ,
1245 + cmd_flags,
1246 + token);
1247 + DPIO_CMD_GET_IRQ(cmd, irq_index);
1248 +
1249 + /* send command to mc*/
1250 + err = mc_send_command(mc_io, &cmd);
1251 + if (err)
1252 + return err;
1253 +
1254 + /* retrieve response parameters */
1255 + DPIO_RSP_GET_IRQ(cmd, *type, irq_cfg);
1256 +
1257 + return 0;
1258 +}
1259 +
1260 +int dpio_set_irq_enable(struct fsl_mc_io *mc_io,
1261 + uint32_t cmd_flags,
1262 + uint16_t token,
1263 + uint8_t irq_index,
1264 + uint8_t en)
1265 +{
1266 + struct mc_command cmd = { 0 };
1267 +
1268 + /* prepare command */
1269 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ_ENABLE,
1270 + cmd_flags,
1271 + token);
1272 + DPIO_CMD_SET_IRQ_ENABLE(cmd, irq_index, en);
1273 +
1274 + /* send command to mc*/
1275 + return mc_send_command(mc_io, &cmd);
1276 +}
1277 +
1278 +int dpio_get_irq_enable(struct fsl_mc_io *mc_io,
1279 + uint32_t cmd_flags,
1280 + uint16_t token,
1281 + uint8_t irq_index,
1282 + uint8_t *en)
1283 +{
1284 + struct mc_command cmd = { 0 };
1285 + int err;
1286 +
1287 + /* prepare command */
1288 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_ENABLE,
1289 + cmd_flags,
1290 + token);
1291 + DPIO_CMD_GET_IRQ_ENABLE(cmd, irq_index);
1292 +
1293 + /* send command to mc*/
1294 + err = mc_send_command(mc_io, &cmd);
1295 + if (err)
1296 + return err;
1297 +
1298 + /* retrieve response parameters */
1299 + DPIO_RSP_GET_IRQ_ENABLE(cmd, *en);
1300 +
1301 + return 0;
1302 +}
1303 +
1304 +int dpio_set_irq_mask(struct fsl_mc_io *mc_io,
1305 + uint32_t cmd_flags,
1306 + uint16_t token,
1307 + uint8_t irq_index,
1308 + uint32_t mask)
1309 +{
1310 + struct mc_command cmd = { 0 };
1311 +
1312 + /* prepare command */
1313 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_IRQ_MASK,
1314 + cmd_flags,
1315 + token);
1316 + DPIO_CMD_SET_IRQ_MASK(cmd, irq_index, mask);
1317 +
1318 + /* send command to mc*/
1319 + return mc_send_command(mc_io, &cmd);
1320 +}
1321 +
1322 +int dpio_get_irq_mask(struct fsl_mc_io *mc_io,
1323 + uint32_t cmd_flags,
1324 + uint16_t token,
1325 + uint8_t irq_index,
1326 + uint32_t *mask)
1327 +{
1328 + struct mc_command cmd = { 0 };
1329 + int err;
1330 +
1331 + /* prepare command */
1332 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_MASK,
1333 + cmd_flags,
1334 + token);
1335 + DPIO_CMD_GET_IRQ_MASK(cmd, irq_index);
1336 +
1337 + /* send command to mc*/
1338 + err = mc_send_command(mc_io, &cmd);
1339 + if (err)
1340 + return err;
1341 +
1342 + /* retrieve response parameters */
1343 + DPIO_RSP_GET_IRQ_MASK(cmd, *mask);
1344 +
1345 + return 0;
1346 +}
1347 +
1348 +int dpio_get_irq_status(struct fsl_mc_io *mc_io,
1349 + uint32_t cmd_flags,
1350 + uint16_t token,
1351 + uint8_t irq_index,
1352 + uint32_t *status)
1353 +{
1354 + struct mc_command cmd = { 0 };
1355 + int err;
1356 +
1357 + /* prepare command */
1358 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_IRQ_STATUS,
1359 + cmd_flags,
1360 + token);
1361 + DPIO_CMD_GET_IRQ_STATUS(cmd, irq_index, *status);
1362 +
1363 + /* send command to mc*/
1364 + err = mc_send_command(mc_io, &cmd);
1365 + if (err)
1366 + return err;
1367 +
1368 + /* retrieve response parameters */
1369 + DPIO_RSP_GET_IRQ_STATUS(cmd, *status);
1370 +
1371 + return 0;
1372 +}
1373 +
1374 +int dpio_clear_irq_status(struct fsl_mc_io *mc_io,
1375 + uint32_t cmd_flags,
1376 + uint16_t token,
1377 + uint8_t irq_index,
1378 + uint32_t status)
1379 +{
1380 + struct mc_command cmd = { 0 };
1381 +
1382 + /* prepare command */
1383 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_CLEAR_IRQ_STATUS,
1384 + cmd_flags,
1385 + token);
1386 + DPIO_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status);
1387 +
1388 + /* send command to mc*/
1389 + return mc_send_command(mc_io, &cmd);
1390 +}
1391 +
1392 +int dpio_get_attributes(struct fsl_mc_io *mc_io,
1393 + uint32_t cmd_flags,
1394 + uint16_t token,
1395 + struct dpio_attr *attr)
1396 +{
1397 + struct mc_command cmd = { 0 };
1398 + int err;
1399 +
1400 + /* prepare command */
1401 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_ATTR,
1402 + cmd_flags,
1403 + token);
1404 +
1405 + /* send command to mc*/
1406 + err = mc_send_command(mc_io, &cmd);
1407 + if (err)
1408 + return err;
1409 +
1410 + /* retrieve response parameters */
1411 + DPIO_RSP_GET_ATTR(cmd, attr);
1412 +
1413 + return 0;
1414 +}
1415 +
1416 +int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
1417 + uint32_t cmd_flags,
1418 + uint16_t token,
1419 + uint8_t sdest)
1420 +{
1421 + struct mc_command cmd = { 0 };
1422 +
1423 + /* prepare command */
1424 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_SET_STASHING_DEST,
1425 + cmd_flags,
1426 + token);
1427 + DPIO_CMD_SET_STASHING_DEST(cmd, sdest);
1428 +
1429 + /* send command to mc*/
1430 + return mc_send_command(mc_io, &cmd);
1431 +}
1432 +
1433 +int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
1434 + uint32_t cmd_flags,
1435 + uint16_t token,
1436 + uint8_t *sdest)
1437 +{
1438 + struct mc_command cmd = { 0 };
1439 + int err;
1440 +
1441 + /* prepare command */
1442 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_GET_STASHING_DEST,
1443 + cmd_flags,
1444 + token);
1445 +
1446 + /* send command to mc*/
1447 + err = mc_send_command(mc_io, &cmd);
1448 + if (err)
1449 + return err;
1450 +
1451 + /* retrieve response parameters */
1452 + DPIO_RSP_GET_STASHING_DEST(cmd, *sdest);
1453 +
1454 + return 0;
1455 +}
1456 +
1457 +int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
1458 + uint32_t cmd_flags,
1459 + uint16_t token,
1460 + int dpcon_id,
1461 + uint8_t *channel_index)
1462 +{
1463 + struct mc_command cmd = { 0 };
1464 + int err;
1465 +
1466 + /* prepare command */
1467 + cmd.header = mc_encode_cmd_header(DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL,
1468 + cmd_flags,
1469 + token);
1470 + DPIO_CMD_ADD_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id);
1471 +
1472 + /* send command to mc*/
1473 + err = mc_send_command(mc_io, &cmd);
1474 + if (err)
1475 + return err;
1476 +
1477 + /* retrieve response parameters */
1478 + DPIO_RSP_ADD_STATIC_DEQUEUE_CHANNEL(cmd, *channel_index);
1479 +
1480 + return 0;
1481 +}
1482 +
1483 +int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
1484 + uint32_t cmd_flags,
1485 + uint16_t token,
1486 + int dpcon_id)
1487 +{
1488 + struct mc_command cmd = { 0 };
1489 +
1490 + /* prepare command */
1491 + cmd.header = mc_encode_cmd_header(
1492 + DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL,
1493 + cmd_flags,
1494 + token);
1495 + DPIO_CMD_REMOVE_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id);
1496 +
1497 + /* send command to mc*/
1498 + return mc_send_command(mc_io, &cmd);
1499 +}
1500 --- /dev/null
1501 +++ b/drivers/staging/fsl-mc/bus/dpio/dpio_service.c
1502 @@ -0,0 +1,801 @@
1503 +/* Copyright 2014 Freescale Semiconductor Inc.
1504 + *
1505 + * Redistribution and use in source and binary forms, with or without
1506 + * modification, are permitted provided that the following conditions are met:
1507 + * * Redistributions of source code must retain the above copyright
1508 + * notice, this list of conditions and the following disclaimer.
1509 + * * Redistributions in binary form must reproduce the above copyright
1510 + * notice, this list of conditions and the following disclaimer in the
1511 + * documentation and/or other materials provided with the distribution.
1512 + * * Neither the name of Freescale Semiconductor nor the
1513 + * names of its contributors may be used to endorse or promote products
1514 + * derived from this software without specific prior written permission.
1515 + *
1516 + *
1517 + * ALTERNATIVELY, this software may be distributed under the terms of the
1518 + * GNU General Public License ("GPL") as published by the Free Software
1519 + * Foundation, either version 2 of that License or (at your option) any
1520 + * later version.
1521 + *
1522 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
1523 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
1524 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
1525 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
1526 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
1527 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
1528 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
1529 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
1530 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
1531 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
1532 + */
1533 +#include <linux/types.h>
1534 +#include "fsl_qbman_portal.h"
1535 +#include "../../include/mc.h"
1536 +#include "../../include/fsl_dpaa2_io.h"
1537 +#include "fsl_dpio.h"
1538 +#include <linux/init.h>
1539 +#include <linux/module.h>
1540 +#include <linux/platform_device.h>
1541 +#include <linux/interrupt.h>
1542 +#include <linux/dma-mapping.h>
1543 +#include <linux/slab.h>
1544 +
1545 +#include "dpio-drv.h"
1546 +#include "qbman_debug.h"
1547 +
1548 +#define UNIMPLEMENTED() pr_err("FOO: %s unimplemented!\n", __func__)
1549 +
1550 +#define MAGIC_SERVICE 0xabcd9876
1551 +#define MAGIC_OBJECT 0x1234fedc
1552 +
1553 +struct dpaa2_io {
1554 + /* If MAGIC_SERVICE, this is a group of objects, use the 'service' part
1555 + * of the union. If MAGIC_OBJECT, use the 'object' part of the union. If
1556 + * it's neither, something got corrupted. This is mainly to satisfy
1557 + * dpaa2_io_from_registration(), which dereferences a caller-
1558 + * instantiated struct and so warrants a bug-checking step - hence the
1559 + * magic rather than a boolean.
1560 + */
1561 + unsigned int magic;
1562 + atomic_t refs;
1563 + union {
1564 + struct dpaa2_io_service {
1565 + spinlock_t lock;
1566 + struct list_head list;
1567 + /* for targeted dpaa2_io selection */
1568 + struct dpaa2_io *objects_by_cpu[NR_CPUS];
1569 + cpumask_t cpus_notifications;
1570 + cpumask_t cpus_stashing;
1571 + int has_nonaffine;
1572 + /* slight hack. record the special case of the
1573 + * "default service", because that's the case where we
1574 + * need to avoid a kfree() ... */
1575 + int is_defservice;
1576 + } service;
1577 + struct dpaa2_io_object {
1578 + struct dpaa2_io_desc dpio_desc;
1579 + struct qbman_swp_desc swp_desc;
1580 + struct qbman_swp *swp;
1581 + /* If the object is part of a service, this is it (and
1582 + * 'node' is linked into the service's list) */
1583 + struct dpaa2_io *service;
1584 + struct list_head node;
1585 + /* Interrupt mask, as used with
1586 + * qbman_swp_interrupt_[gs]et_vanish(). This isn't
1587 + * locked, because the higher layer is driving all
1588 + * "ingress" processing. */
1589 + uint32_t irq_mask;
1590 + /* As part of simplifying assumptions, we provide an
1591 + * irq-safe lock for each type of DPIO operation that
1592 + * isn't innately lockless. The selection algorithms
1593 + * (which are simplified) require this, whereas
1594 + * eventually adherence to cpu-affinity will presumably
1595 + * relax the locking requirements. */
1596 + spinlock_t lock_mgmt_cmd;
1597 + spinlock_t lock_notifications;
1598 + struct list_head notifications;
1599 + } object;
1600 + };
1601 +};
1602 +
1603 +struct dpaa2_io_store {
1604 + unsigned int max;
1605 + dma_addr_t paddr;
1606 + struct dpaa2_dq *vaddr;
1607 + void *alloced_addr; /* the actual return from kmalloc as it may
1608 + be adjusted for alignment purposes */
1609 + unsigned int idx; /* position of the next-to-be-returned entry */
1610 + struct qbman_swp *swp; /* portal used to issue VDQCR */
1611 + struct device *dev; /* device used for DMA mapping */
1612 +};
1613 +
1614 +static struct dpaa2_io def_serv;
1615 +
1616 +/**********************/
1617 +/* Internal functions */
1618 +/**********************/
1619 +
1620 +static void service_init(struct dpaa2_io *d, int is_defservice)
1621 +{
1622 + struct dpaa2_io_service *s = &d->service;
1623 +
1624 + d->magic = MAGIC_SERVICE;
1625 + atomic_set(&d->refs, 1);
1626 + spin_lock_init(&s->lock);
1627 + INIT_LIST_HEAD(&s->list);
1628 + cpumask_clear(&s->cpus_notifications);
1629 + cpumask_clear(&s->cpus_stashing);
1630 + s->has_nonaffine = 0;
1631 + s->is_defservice = is_defservice;
1632 +}
1633 +
1634 +/* Selection algorithms, stupid ones at that. These are to handle the case where
1635 + * the given dpaa2_io is a service, by choosing the non-service dpaa2_io within
1636 + * it to use.
1637 + */
1638 +static struct dpaa2_io *_service_select_by_cpu_slow(struct dpaa2_io_service *ss,
1639 + int cpu)
1640 +{
1641 + struct dpaa2_io *o;
1642 + unsigned long irqflags;
1643 +
1644 + spin_lock_irqsave(&ss->lock, irqflags);
1645 + /* TODO: this is about the dumbest and slowest selection algorithm you
1646 + * could imagine. (We're looking for something working first, and
1647 + * something efficient second...)
1648 + */
1649 + list_for_each_entry(o, &ss->list, object.node)
1650 + if (o->object.dpio_desc.cpu == cpu)
1651 + goto found;
1652 +
1653 + /* No joy. Try the first nonaffine portal (bleurgh) */
1654 + if (ss->has_nonaffine)
1655 + list_for_each_entry(o, &ss->list, object.node)
1656 + if (!o->object.dpio_desc.stash_affinity)
1657 + goto found;
1658 +
1659 + /* No joy. Try the first object. Told you it was horrible. */
1660 + if (!list_empty(&ss->list))
1661 + o = list_entry(ss->list.next, struct dpaa2_io, object.node);
1662 + else
1663 + o = NULL;
1664 +
1665 +found:
1666 + spin_unlock_irqrestore(&ss->lock, irqflags);
1667 + return o;
1668 +}
1669 +
1670 +static struct dpaa2_io *service_select_by_cpu(struct dpaa2_io *d, int cpu)
1671 +{
1672 + struct dpaa2_io_service *ss;
1673 + unsigned long irqflags;
1674 +
1675 + if (!d)
1676 + d = &def_serv;
1677 + else if (d->magic == MAGIC_OBJECT)
1678 + return d;
1679 + BUG_ON(d->magic != MAGIC_SERVICE);
1680 +
1681 + ss = &d->service;
1682 +
1683 +	/* If cpu == -1, choose the current cpu, with no guarantee that we
1684 +	 * won't be migrated away before the portal is used.
1685 +	 */
1686 + if (unlikely(cpu < 0)) {
1687 + spin_lock_irqsave(&ss->lock, irqflags);
1688 + cpu = smp_processor_id();
1689 + spin_unlock_irqrestore(&ss->lock, irqflags);
1690 +
1691 + return _service_select_by_cpu_slow(ss, cpu);
1692 + }
1693 +
1694 + /* If a specific cpu was requested, pick it up immediately */
1695 + return ss->objects_by_cpu[cpu];
1696 +}
1697 +
1698 +static inline struct dpaa2_io *service_select_any(struct dpaa2_io *d)
1699 +{
1700 + struct dpaa2_io_service *ss;
1701 + struct dpaa2_io *o;
1702 + unsigned long irqflags;
1703 +
1704 + if (!d)
1705 + d = &def_serv;
1706 + else if (d->magic == MAGIC_OBJECT)
1707 + return d;
1708 + BUG_ON(d->magic != MAGIC_SERVICE);
1709 +
1710 + /*
1711 + * Lock the service, looking for the first DPIO object in the list,
1712 + * ignore everything else about that DPIO, and choose it to do the
1713 + * operation! As a post-selection step, move the DPIO to the end of
1714 + * the list. It should improve load-balancing a little, although it
1715 + * might also incur a performance hit, given that the lock is *global*
1716 + * and this may be called on the fast-path...
1717 + */
1718 + ss = &d->service;
1719 + spin_lock_irqsave(&ss->lock, irqflags);
1720 + if (!list_empty(&ss->list)) {
1721 + o = list_entry(ss->list.next, struct dpaa2_io, object.node);
1722 + list_del(&o->object.node);
1723 + list_add_tail(&o->object.node, &ss->list);
1724 + } else
1725 + o = NULL;
1726 + spin_unlock_irqrestore(&ss->lock, irqflags);
1727 + return o;
1728 +}
1729 +
1730 +/* If the context is not preemptible, select the service affine to the
1731 + * current cpu. Otherwise, "select any".
1732 + */
1733 +static inline struct dpaa2_io *_service_select(struct dpaa2_io *d)
1734 +{
1735 + struct dpaa2_io *temp = d;
1736 +
1737 + if (likely(!preemptible())) {
1738 + d = service_select_by_cpu(d, smp_processor_id());
1739 + if (likely(d))
1740 + return d;
1741 + }
1742 + return service_select_any(temp);
1743 +}
1744 +
1745 +/**********************/
1746 +/* Exported functions */
1747 +/**********************/
1748 +
1749 +struct dpaa2_io *dpaa2_io_create(const struct dpaa2_io_desc *desc)
1750 +{
1751 +	struct dpaa2_io *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
1752 +	struct dpaa2_io_object *o;
1753 +	if (!ret)
1754 +		return NULL;
1755 +	o = &ret->object;
1756 +	ret->magic = MAGIC_OBJECT;
1757 + atomic_set(&ret->refs, 1);
1758 + o->dpio_desc = *desc;
1759 + o->swp_desc.cena_bar = o->dpio_desc.regs_cena;
1760 + o->swp_desc.cinh_bar = o->dpio_desc.regs_cinh;
1761 + o->swp_desc.qman_version = o->dpio_desc.qman_version;
1762 + o->swp = qbman_swp_init(&o->swp_desc);
1763 + o->service = NULL;
1764 + if (!o->swp) {
1765 + kfree(ret);
1766 + return NULL;
1767 + }
1768 + INIT_LIST_HEAD(&o->node);
1769 + spin_lock_init(&o->lock_mgmt_cmd);
1770 + spin_lock_init(&o->lock_notifications);
1771 + INIT_LIST_HEAD(&o->notifications);
1772 + if (!o->dpio_desc.has_irq)
1773 + qbman_swp_interrupt_set_vanish(o->swp, 0xffffffff);
1774 + else {
1775 + /* For now only enable DQRR interrupts */
1776 + qbman_swp_interrupt_set_trigger(o->swp,
1777 + QBMAN_SWP_INTERRUPT_DQRI);
1778 + }
1779 + qbman_swp_interrupt_clear_status(o->swp, 0xffffffff);
1780 + if (o->dpio_desc.receives_notifications)
1781 + qbman_swp_push_set(o->swp, 0, 1);
1782 + return ret;
1783 +}
1784 +EXPORT_SYMBOL(dpaa2_io_create);
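+
+/*
+ * Editor's illustrative sketch (not part of the original patch): a minimal,
+ * hedged example of building a dpaa2_io object from a descriptor. All values
+ * below are hypothetical; in practice dpio-drv.c derives them from DPIO
+ * attributes and from pre-mapped portal regions.
+ */
+static __maybe_unused struct dpaa2_io *example_io_create(void *cena,
+							 void *cinh, int cpu)
+{
+	struct dpaa2_io_desc desc = {
+		.dpio_id = 0,			/* hypothetical object ID */
+		.cpu = cpu,			/* affine cpu */
+		.regs_cena = cena,		/* cache-enabled portal map */
+		.regs_cinh = cinh,		/* cache-inhibited portal map */
+		.qman_version = 0x4100,		/* hypothetical QMan revision */
+		.has_irq = 0,			/* poll mode, no interrupt */
+		.receives_notifications = 1,	/* handles [F|C]DANs */
+		.stash_affinity = 1,
+	};
+
+	return dpaa2_io_create(&desc);
+}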
1785 +
1786 +struct dpaa2_io *dpaa2_io_create_service(void)
1787 +{
1788 + struct dpaa2_io *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
1789 +
1790 + if (ret)
1791 + service_init(ret, 0);
1792 + return ret;
1793 +}
1794 +EXPORT_SYMBOL(dpaa2_io_create_service);
1795 +
1796 +struct dpaa2_io *dpaa2_io_default_service(void)
1797 +{
1798 + atomic_inc(&def_serv.refs);
1799 + return &def_serv;
1800 +}
1801 +EXPORT_SYMBOL(dpaa2_io_default_service);
1802 +
1803 +void dpaa2_io_down(struct dpaa2_io *d)
1804 +{
1805 + if (!atomic_dec_and_test(&d->refs))
1806 + return;
1807 + if (d->magic == MAGIC_SERVICE) {
1808 + BUG_ON(!list_empty(&d->service.list));
1809 + if (d->service.is_defservice)
1810 + /* avoid the kfree()! */
1811 + return;
1812 + } else {
1813 + BUG_ON(d->magic != MAGIC_OBJECT);
1814 + BUG_ON(d->object.service);
1815 + BUG_ON(!list_empty(&d->object.notifications));
1816 + }
1817 + kfree(d);
1818 +}
1819 +EXPORT_SYMBOL(dpaa2_io_down);
1820 +
1821 +int dpaa2_io_service_add(struct dpaa2_io *s, struct dpaa2_io *o)
1822 +{
1823 + struct dpaa2_io_service *ss = &s->service;
1824 + struct dpaa2_io_object *oo = &o->object;
1825 + int res = -EINVAL;
1826 +
1827 + if ((s->magic != MAGIC_SERVICE) || (o->magic != MAGIC_OBJECT))
1828 + return res;
1829 + atomic_inc(&o->refs);
1830 + atomic_inc(&s->refs);
1831 + spin_lock(&ss->lock);
1832 + /* 'obj' must not already be associated with a service */
1833 + if (!oo->service) {
1834 + oo->service = s;
1835 + list_add(&oo->node, &ss->list);
1836 + if (oo->dpio_desc.receives_notifications) {
1837 + cpumask_set_cpu(oo->dpio_desc.cpu,
1838 + &ss->cpus_notifications);
1839 + /* Update the fast-access array */
1840 + ss->objects_by_cpu[oo->dpio_desc.cpu] =
1841 + container_of(oo, struct dpaa2_io, object);
1842 + }
1843 + if (oo->dpio_desc.stash_affinity)
1844 + cpumask_set_cpu(oo->dpio_desc.cpu,
1845 + &ss->cpus_stashing);
1846 + if (!oo->dpio_desc.stash_affinity)
1847 + ss->has_nonaffine = 1;
1848 + /* success */
1849 + res = 0;
1850 + }
1851 + spin_unlock(&ss->lock);
1852 + if (res) {
1853 + dpaa2_io_down(s);
1854 + dpaa2_io_down(o);
1855 + }
1856 + return res;
1857 +}
1858 +EXPORT_SYMBOL(dpaa2_io_service_add);
1859 +
1860 +int dpaa2_io_get_descriptor(struct dpaa2_io *obj, struct dpaa2_io_desc *desc)
1861 +{
1862 + if (obj->magic == MAGIC_SERVICE)
1863 + return -EINVAL;
1864 + BUG_ON(obj->magic != MAGIC_OBJECT);
1865 + *desc = obj->object.dpio_desc;
1866 + return 0;
1867 +}
1868 +EXPORT_SYMBOL(dpaa2_io_get_descriptor);
1869 +
1870 +#define DPAA_POLL_MAX 32
1871 +
1872 +int dpaa2_io_poll(struct dpaa2_io *obj)
1873 +{
1874 + const struct dpaa2_dq *dq;
1875 + struct qbman_swp *swp;
1876 + int max = 0;
1877 +
1878 + if (obj->magic != MAGIC_OBJECT)
1879 + return -EINVAL;
1880 + swp = obj->object.swp;
1881 + dq = qbman_swp_dqrr_next(swp);
1882 + while (dq) {
1883 + if (qbman_result_is_SCN(dq)) {
1884 + struct dpaa2_io_notification_ctx *ctx;
1885 + uint64_t q64;
1886 +
1887 + q64 = qbman_result_SCN_ctx(dq);
1888 + ctx = (void *)q64;
1889 + ctx->cb(ctx);
1890 + } else
1891 + pr_crit("Unrecognised/ignored DQRR entry\n");
1892 + qbman_swp_dqrr_consume(swp, dq);
1893 + ++max;
1894 + if (max > DPAA_POLL_MAX)
1895 + return 0;
1896 + dq = qbman_swp_dqrr_next(swp);
1897 + }
1898 + return 0;
1899 +}
1900 +EXPORT_SYMBOL(dpaa2_io_poll);
1901 +
1902 +int dpaa2_io_irq(struct dpaa2_io *obj)
1903 +{
1904 + struct qbman_swp *swp;
1905 + uint32_t status;
1906 +
1907 + if (obj->magic != MAGIC_OBJECT)
1908 + return -EINVAL;
1909 + swp = obj->object.swp;
1910 + status = qbman_swp_interrupt_read_status(swp);
1911 + if (!status)
1912 + return IRQ_NONE;
1913 + dpaa2_io_poll(obj);
1914 + qbman_swp_interrupt_clear_status(swp, status);
1915 + qbman_swp_interrupt_set_inhibit(swp, 0);
1916 + return IRQ_HANDLED;
1917 +}
1918 +EXPORT_SYMBOL(dpaa2_io_irq);
1919 +
1920 +int dpaa2_io_pause_poll(struct dpaa2_io *obj)
1921 +{
1922 + UNIMPLEMENTED();
1923 + return -EINVAL;
1924 +}
1925 +EXPORT_SYMBOL(dpaa2_io_pause_poll);
1926 +
1927 +int dpaa2_io_resume_poll(struct dpaa2_io *obj)
1928 +{
1929 + UNIMPLEMENTED();
1930 + return -EINVAL;
1931 +}
1932 +EXPORT_SYMBOL(dpaa2_io_resume_poll);
1933 +
1934 +void dpaa2_io_service_notifications(struct dpaa2_io *s, cpumask_t *mask)
1935 +{
1936 + struct dpaa2_io_service *ss = &s->service;
1937 +
1938 + BUG_ON(s->magic != MAGIC_SERVICE);
1939 + cpumask_copy(mask, &ss->cpus_notifications);
1940 +}
1941 +EXPORT_SYMBOL(dpaa2_io_service_notifications);
1942 +
1943 +void dpaa2_io_service_stashing(struct dpaa2_io *s, cpumask_t *mask)
1944 +{
1945 + struct dpaa2_io_service *ss = &s->service;
1946 +
1947 + BUG_ON(s->magic != MAGIC_SERVICE);
1948 + cpumask_copy(mask, &ss->cpus_stashing);
1949 +}
1950 +EXPORT_SYMBOL(dpaa2_io_service_stashing);
1951 +
1952 +int dpaa2_io_service_has_nonaffine(struct dpaa2_io *s)
1953 +{
1954 + struct dpaa2_io_service *ss = &s->service;
1955 +
1956 + BUG_ON(s->magic != MAGIC_SERVICE);
1957 + return ss->has_nonaffine;
1958 +}
1959 +EXPORT_SYMBOL(dpaa2_io_service_has_nonaffine);
1960 +
1961 +int dpaa2_io_service_register(struct dpaa2_io *d,
1962 + struct dpaa2_io_notification_ctx *ctx)
1963 +{
1964 + unsigned long irqflags;
1965 +
1966 + d = service_select_by_cpu(d, ctx->desired_cpu);
1967 + if (!d)
1968 + return -ENODEV;
1969 + ctx->dpio_id = d->object.dpio_desc.dpio_id;
1970 + ctx->qman64 = (uint64_t)ctx;
1971 + ctx->dpio_private = d;
1972 + spin_lock_irqsave(&d->object.lock_notifications, irqflags);
1973 + list_add(&ctx->node, &d->object.notifications);
1974 + spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
1975 + if (ctx->is_cdan)
1976 + /* Enable the generation of CDAN notifications */
1977 + qbman_swp_CDAN_set_context_enable(d->object.swp,
1978 + (uint16_t)ctx->id,
1979 + ctx->qman64);
1980 + return 0;
1981 +}
1982 +EXPORT_SYMBOL(dpaa2_io_service_register);
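+
+/*
+ * Editor's illustrative sketch (not part of the original patch): registering
+ * an FQDAN notification context. 'example_ctx_dev' and its fields are
+ * hypothetical; the callback runs from dpaa2_io_poll()/dpaa2_io_irq() context
+ * and recovers its enclosing state with container_of().
+ */
+struct example_ctx_dev {
+	struct dpaa2_io_notification_ctx ctx;
+	uint32_t fqid;
+};
+
+static void example_fqdan_cb(struct dpaa2_io_notification_ctx *ctx)
+{
+	struct example_ctx_dev *dev = container_of(ctx,
+						   struct example_ctx_dev, ctx);
+
+	/* schedule dequeue work for dev->fqid here, then re-arm the
+	 * notification with dpaa2_io_service_rearm() once drained */
+	pr_info("FQDAN for FQID %u\n", dev->fqid);
+}
+
+static __maybe_unused int example_register(struct dpaa2_io *service,
+					   struct example_ctx_dev *dev)
+{
+	dev->ctx.is_cdan = 0;		/* FQDAN, not a channel CDAN */
+	dev->ctx.id = dev->fqid;	/* FQID for FQDAN registrations */
+	dev->ctx.desired_cpu = -1;	/* any cpu */
+	dev->ctx.cb = example_fqdan_cb;
+	return dpaa2_io_service_register(service, &dev->ctx);
+}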
1983 +
1984 +int dpaa2_io_service_deregister(struct dpaa2_io *service,
1985 + struct dpaa2_io_notification_ctx *ctx)
1986 +{
1987 + struct dpaa2_io *d = ctx->dpio_private;
1988 + unsigned long irqflags;
1989 +
1990 + if (!service)
1991 + service = &def_serv;
1992 + BUG_ON((service != d) && (service != d->object.service));
1993 + if (ctx->is_cdan)
1994 + qbman_swp_CDAN_disable(d->object.swp,
1995 + (uint16_t)ctx->id);
1996 + spin_lock_irqsave(&d->object.lock_notifications, irqflags);
1997 + list_del(&ctx->node);
1998 + spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
1999 + return 0;
2000 +}
2001 +EXPORT_SYMBOL(dpaa2_io_service_deregister);
2002 +
2003 +int dpaa2_io_service_rearm(struct dpaa2_io *d,
2004 + struct dpaa2_io_notification_ctx *ctx)
2005 +{
2006 + unsigned long irqflags;
2007 + int err;
2008 +
2009 + d = _service_select(d);
2010 + if (!d)
2011 + return -ENODEV;
2012 + spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
2013 + if (ctx->is_cdan)
2014 + err = qbman_swp_CDAN_enable(d->object.swp, (uint16_t)ctx->id);
2015 + else
2016 + err = qbman_swp_fq_schedule(d->object.swp, ctx->id);
2017 + spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
2018 + return err;
2019 +}
2020 +EXPORT_SYMBOL(dpaa2_io_service_rearm);
2021 +
2022 +int dpaa2_io_from_registration(struct dpaa2_io_notification_ctx *ctx,
2023 + struct dpaa2_io **io)
2024 +{
2025 + struct dpaa2_io_notification_ctx *tmp;
2026 + struct dpaa2_io *d = ctx->dpio_private;
2027 + unsigned long irqflags;
2028 + int ret = 0;
2029 +
2030 + BUG_ON(d->magic != MAGIC_OBJECT);
2031 +	/* Iterate the notifications associated with 'd' looking for a match. If
2032 +	 * there is none, we've been passed an unregistered ctx! */
2033 + spin_lock_irqsave(&d->object.lock_notifications, irqflags);
2034 + list_for_each_entry(tmp, &d->object.notifications, node)
2035 + if (tmp == ctx)
2036 + goto found;
2037 + ret = -EINVAL;
2038 +found:
2039 + spin_unlock_irqrestore(&d->object.lock_notifications, irqflags);
2040 + if (!ret) {
2041 + atomic_inc(&d->refs);
2042 + *io = d;
2043 + }
2044 + return ret;
2045 +}
2046 +EXPORT_SYMBOL(dpaa2_io_from_registration);
2047 +
2048 +int dpaa2_io_service_get_persistent(struct dpaa2_io *service, int cpu,
2049 + struct dpaa2_io **ret)
2050 +{
2051 + if (cpu == -1)
2052 + *ret = service_select_any(service);
2053 + else
2054 + *ret = service_select_by_cpu(service, cpu);
2055 + if (*ret) {
2056 + atomic_inc(&(*ret)->refs);
2057 + return 0;
2058 + }
2059 + return -ENODEV;
2060 +}
2061 +EXPORT_SYMBOL(dpaa2_io_service_get_persistent);
2062 +
2063 +int dpaa2_io_service_pull_fq(struct dpaa2_io *d, uint32_t fqid,
2064 + struct dpaa2_io_store *s)
2065 +{
2066 + struct qbman_pull_desc pd;
2067 + int err;
2068 +
2069 + qbman_pull_desc_clear(&pd);
2070 + qbman_pull_desc_set_storage(&pd, s->vaddr, s->paddr, 1);
2071 + qbman_pull_desc_set_numframes(&pd, (uint8_t)s->max);
2072 + qbman_pull_desc_set_fq(&pd, fqid);
2073 + d = _service_select(d);
2074 + if (!d)
2075 + return -ENODEV;
2076 + s->swp = d->object.swp;
2077 + err = qbman_swp_pull(d->object.swp, &pd);
2078 + if (err)
2079 + s->swp = NULL;
2080 + return err;
2081 +}
2082 +EXPORT_SYMBOL(dpaa2_io_service_pull_fq);
2083 +
2084 +int dpaa2_io_service_pull_channel(struct dpaa2_io *d, uint32_t channelid,
2085 + struct dpaa2_io_store *s)
2086 +{
2087 + struct qbman_pull_desc pd;
2088 + int err;
2089 +
2090 + qbman_pull_desc_clear(&pd);
2091 + qbman_pull_desc_set_storage(&pd, s->vaddr, s->paddr, 1);
2092 + qbman_pull_desc_set_numframes(&pd, (uint8_t)s->max);
2093 + qbman_pull_desc_set_channel(&pd, channelid, qbman_pull_type_prio);
2094 + d = _service_select(d);
2095 + if (!d)
2096 + return -ENODEV;
2097 + s->swp = d->object.swp;
2098 + err = qbman_swp_pull(d->object.swp, &pd);
2099 + if (err)
2100 + s->swp = NULL;
2101 + return err;
2102 +}
2103 +EXPORT_SYMBOL(dpaa2_io_service_pull_channel);
2104 +
2105 +int dpaa2_io_service_enqueue_fq(struct dpaa2_io *d,
2106 + uint32_t fqid,
2107 + const struct dpaa2_fd *fd)
2108 +{
2109 + struct qbman_eq_desc ed;
2110 +
2111 + d = _service_select(d);
2112 + if (!d)
2113 + return -ENODEV;
2114 + qbman_eq_desc_clear(&ed);
2115 + qbman_eq_desc_set_no_orp(&ed, 0);
2116 + qbman_eq_desc_set_fq(&ed, fqid);
2117 + return qbman_swp_enqueue(d->object.swp, &ed,
2118 + (const struct qbman_fd *)fd);
2119 +}
2120 +EXPORT_SYMBOL(dpaa2_io_service_enqueue_fq);
2121 +
2122 +int dpaa2_io_service_enqueue_qd(struct dpaa2_io *d,
2123 + uint32_t qdid, uint8_t prio, uint16_t qdbin,
2124 + const struct dpaa2_fd *fd)
2125 +{
2126 + struct qbman_eq_desc ed;
2127 +
2128 + d = _service_select(d);
2129 + if (!d)
2130 + return -ENODEV;
2131 + qbman_eq_desc_clear(&ed);
2132 + qbman_eq_desc_set_no_orp(&ed, 0);
2133 + qbman_eq_desc_set_qd(&ed, qdid, qdbin, prio);
2134 + return qbman_swp_enqueue(d->object.swp, &ed,
2135 + (const struct qbman_fd *)fd);
2136 +}
2137 +EXPORT_SYMBOL(dpaa2_io_service_enqueue_qd);
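+
+/*
+ * Editor's illustrative sketch (not part of the original patch): a hedged
+ * enqueue example. Passing a NULL service selects the default service via
+ * _service_select(); 'fqid' and 'fd' are hypothetical. A full enqueue ring
+ * surfaces as -EBUSY from the underlying qbman_swp_enqueue(), so callers
+ * typically retry.
+ */
+static __maybe_unused int example_enqueue(uint32_t fqid,
+					  const struct dpaa2_fd *fd)
+{
+	int err;
+
+	do {
+		err = dpaa2_io_service_enqueue_fq(NULL, fqid, fd);
+	} while (err == -EBUSY);	/* naive retry; real code backs off */
+	return err;
+}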
2138 +
2139 +int dpaa2_io_service_release(struct dpaa2_io *d,
2140 + uint32_t bpid,
2141 + const uint64_t *buffers,
2142 + unsigned int num_buffers)
2143 +{
2144 + struct qbman_release_desc rd;
2145 +
2146 + d = _service_select(d);
2147 + if (!d)
2148 + return -ENODEV;
2149 + qbman_release_desc_clear(&rd);
2150 + qbman_release_desc_set_bpid(&rd, bpid);
2151 + return qbman_swp_release(d->object.swp, &rd, buffers, num_buffers);
2152 +}
2153 +EXPORT_SYMBOL(dpaa2_io_service_release);
2154 +
2155 +int dpaa2_io_service_acquire(struct dpaa2_io *d,
2156 + uint32_t bpid,
2157 + uint64_t *buffers,
2158 + unsigned int num_buffers)
2159 +{
2160 + unsigned long irqflags;
2161 + int err;
2162 +
2163 + d = _service_select(d);
2164 + if (!d)
2165 + return -ENODEV;
2166 + spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
2167 + err = qbman_swp_acquire(d->object.swp, bpid, buffers, num_buffers);
2168 + spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
2169 + return err;
2170 +}
2171 +EXPORT_SYMBOL(dpaa2_io_service_acquire);
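+
+/*
+ * Editor's illustrative sketch (not part of the original patch): seeding and
+ * draining a buffer pool through the service. 'bpid' and the addresses in
+ * 'bufs' are hypothetical; release hands QBMan physical buffer addresses that
+ * are later returned by acquire, which (per the qbman driver) yields the
+ * number of buffers obtained or a negative error.
+ */
+static __maybe_unused void example_bp_cycle(uint32_t bpid, uint64_t *bufs,
+					    unsigned int num)
+{
+	int n;
+
+	if (dpaa2_io_service_release(NULL, bpid, bufs, num))
+		return;			/* seed failed */
+	n = dpaa2_io_service_acquire(NULL, bpid, bufs, num);
+	if (n > 0) {
+		/* bufs[0..n-1] are now owned by software again */
+	}
+}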
2172 +
2173 +struct dpaa2_io_store *dpaa2_io_store_create(unsigned int max_frames,
2174 + struct device *dev)
2175 +{
2176 + struct dpaa2_io_store *ret = kmalloc(sizeof(*ret), GFP_KERNEL);
2177 + size_t size;
2178 +
2179 + BUG_ON(!max_frames || (max_frames > 16));
2180 + if (!ret)
2181 + return NULL;
2182 + ret->max = max_frames;
2183 + size = max_frames * sizeof(struct dpaa2_dq) + 64;
2184 + ret->alloced_addr = kmalloc(size, GFP_KERNEL);
2185 + if (!ret->alloced_addr) {
2186 + kfree(ret);
2187 + return NULL;
2188 + }
2189 + ret->vaddr = PTR_ALIGN(ret->alloced_addr, 64);
2190 + ret->paddr = dma_map_single(dev, ret->vaddr,
2191 + sizeof(struct dpaa2_dq) * max_frames,
2192 + DMA_FROM_DEVICE);
2193 + if (dma_mapping_error(dev, ret->paddr)) {
2194 + kfree(ret->alloced_addr);
2195 + kfree(ret);
2196 + return NULL;
2197 + }
2198 + ret->idx = 0;
2199 + ret->dev = dev;
2200 + return ret;
2201 +}
2202 +EXPORT_SYMBOL(dpaa2_io_store_create);
2203 +
2204 +void dpaa2_io_store_destroy(struct dpaa2_io_store *s)
2205 +{
2206 + dma_unmap_single(s->dev, s->paddr, sizeof(struct dpaa2_dq) * s->max,
2207 + DMA_FROM_DEVICE);
2208 + kfree(s->alloced_addr);
2209 + kfree(s);
2210 +}
2211 +EXPORT_SYMBOL(dpaa2_io_store_destroy);
2212 +
2213 +struct dpaa2_dq *dpaa2_io_store_next(struct dpaa2_io_store *s, int *is_last)
2214 +{
2215 + int match;
2216 + struct dpaa2_dq *ret = &s->vaddr[s->idx];
2217 +
2218 + match = qbman_result_has_new_result(s->swp, ret);
2219 + if (!match) {
2220 + *is_last = 0;
2221 + return NULL;
2222 + }
2223 + BUG_ON(!qbman_result_is_DQ(ret));
2224 + s->idx++;
2225 + if (dpaa2_dq_is_pull_complete(ret)) {
2226 + *is_last = 1;
2227 + s->idx = 0;
2228 +		/* If we get an empty dequeue result to terminate a zero-results
2229 +		 * vdqcr, return NULL to the caller rather than expecting the
2230 +		 * caller to check non-NULL results every time. */
2231 + if (!(dpaa2_dq_flags(ret) & DPAA2_DQ_STAT_VALIDFRAME))
2232 + ret = NULL;
2233 + } else
2234 + *is_last = 0;
2235 + return ret;
2236 +}
2237 +EXPORT_SYMBOL(dpaa2_io_store_next);
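+
+/*
+ * Editor's illustrative sketch (not part of the original patch): a pull-mode
+ * (VDQCR) dequeue loop over the store API above. Error handling is trimmed
+ * and 'fqid' is hypothetical. A NULL return from dpaa2_io_store_next() means
+ * either "entry not produced yet" (is_last == 0, poll again) or an empty
+ * terminating entry (is_last == 1).
+ */
+static __maybe_unused void example_pull(struct dpaa2_io *io,
+					struct device *dev, uint32_t fqid)
+{
+	struct dpaa2_io_store *s = dpaa2_io_store_create(8, dev);
+	struct dpaa2_dq *dq;
+	int is_last = 0;
+
+	if (!s)
+		return;
+	if (dpaa2_io_service_pull_fq(io, fqid, s))
+		goto out;
+	while (!is_last) {
+		dq = dpaa2_io_store_next(s, &is_last);
+		if (dq) {
+			/* process the dequeue entry, eg. its frame descriptor */
+		}
+	}
+out:
+	dpaa2_io_store_destroy(s);
+}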
2238 +
2239 +#ifdef CONFIG_FSL_QBMAN_DEBUG
2240 +int dpaa2_io_query_fq_count(struct dpaa2_io *d, uint32_t fqid,
2241 + uint32_t *fcnt, uint32_t *bcnt)
2242 +{
2243 + struct qbman_attr state;
2244 + struct qbman_swp *swp;
2245 + unsigned long irqflags;
2246 + int ret;
2247 +
2248 + d = service_select_any(d);
2249 + if (!d)
2250 + return -ENODEV;
2251 +
2252 + swp = d->object.swp;
2253 + spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
2254 + ret = qbman_fq_query_state(swp, fqid, &state);
2255 + spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
2256 + if (ret)
2257 + return ret;
2258 + *fcnt = qbman_fq_state_frame_count(&state);
2259 + *bcnt = qbman_fq_state_byte_count(&state);
2260 +
2261 + return 0;
2262 +}
2263 +EXPORT_SYMBOL(dpaa2_io_query_fq_count);
2264 +
2265 +int dpaa2_io_query_bp_count(struct dpaa2_io *d, uint32_t bpid,
2266 + uint32_t *num)
2267 +{
2268 + struct qbman_attr state;
2269 + struct qbman_swp *swp;
2270 + unsigned long irqflags;
2271 + int ret;
2272 +
2273 + d = service_select_any(d);
2274 + if (!d)
2275 + return -ENODEV;
2276 +
2277 + swp = d->object.swp;
2278 + spin_lock_irqsave(&d->object.lock_mgmt_cmd, irqflags);
2279 + ret = qbman_bp_query(swp, bpid, &state);
2280 + spin_unlock_irqrestore(&d->object.lock_mgmt_cmd, irqflags);
2281 + if (ret)
2282 + return ret;
2283 + *num = qbman_bp_info_num_free_bufs(&state);
2284 + return 0;
2285 +}
2286 +EXPORT_SYMBOL(dpaa2_io_query_bp_count);
2287 +
2288 +#endif
2289 +
2290 +/* module init/exit hooks called from dpio-drv.c. These are declared in
2291 + * dpio-drv.h.
2292 + */
2293 +int dpaa2_io_service_driver_init(void)
2294 +{
2295 + service_init(&def_serv, 1);
2296 + return 0;
2297 +}
2298 +
2299 +void dpaa2_io_service_driver_exit(void)
2300 +{
2301 + if (atomic_read(&def_serv.refs) != 1)
2302 + pr_err("default DPIO service leaves dangling DPIO objects!\n");
2303 +}
2304 --- /dev/null
2305 +++ b/drivers/staging/fsl-mc/bus/dpio/fsl_dpio.h
2306 @@ -0,0 +1,460 @@
2307 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
2308 + *
2309 + * Redistribution and use in source and binary forms, with or without
2310 + * modification, are permitted provided that the following conditions are met:
2311 + * * Redistributions of source code must retain the above copyright
2312 + * notice, this list of conditions and the following disclaimer.
2313 + * * Redistributions in binary form must reproduce the above copyright
2314 + * notice, this list of conditions and the following disclaimer in the
2315 + * documentation and/or other materials provided with the distribution.
2316 + * * Neither the name of the above-listed copyright holders nor the
2317 + * names of any contributors may be used to endorse or promote products
2318 + * derived from this software without specific prior written permission.
2319 + *
2320 + *
2321 + * ALTERNATIVELY, this software may be distributed under the terms of the
2322 + * GNU General Public License ("GPL") as published by the Free Software
2323 + * Foundation, either version 2 of that License or (at your option) any
2324 + * later version.
2325 + *
2326 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
2327 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
2328 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
2329 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
2330 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
2331 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
2332 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
2333 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
2334 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
2335 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
2336 + * POSSIBILITY OF SUCH DAMAGE.
2337 + */
2338 +#ifndef __FSL_DPIO_H
2339 +#define __FSL_DPIO_H
2340 +
2341 +/* Data Path I/O Portal API
2342 + * Contains initialization APIs and runtime control APIs for DPIO
2343 + */
2344 +
2345 +struct fsl_mc_io;
2346 +
2347 +/**
2348 + * dpio_open() - Open a control session for the specified object
2349 + * @mc_io: Pointer to MC portal's I/O object
2350 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2351 + * @dpio_id: DPIO unique ID
2352 + * @token: Returned token; use in subsequent API calls
2353 + *
2354 + * This function can be used to open a control session for an
2355 + * already created object; an object may have been declared in
2356 + * the DPL or by calling the dpio_create() function.
2357 + * This function returns a unique authentication token,
2358 + * associated with the specific object ID and the specific MC
2359 + * portal; this token must be used in all subsequent commands for
2360 + * this specific object.
2361 + *
2362 + * Return: '0' on Success; Error code otherwise.
2363 + */
2364 +int dpio_open(struct fsl_mc_io *mc_io,
2365 + uint32_t cmd_flags,
2366 + int dpio_id,
2367 + uint16_t *token);
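+
+/*
+ * Editor's illustrative sketch (not part of the original patch) of a typical
+ * control session using this header's API, assuming 'mc_io' came from
+ * fsl_mc_portal_allocate() and using zero command flags; error handling is
+ * trimmed:
+ *
+ *	uint16_t token;
+ *	struct dpio_attr attr;
+ *
+ *	dpio_open(mc_io, 0, dpio_id, &token);
+ *	dpio_get_attributes(mc_io, 0, token, &attr);
+ *	dpio_enable(mc_io, 0, token);
+ *	... map and use the portal described by attr ...
+ *	dpio_close(mc_io, 0, token);
+ */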
2368 +
2369 +/**
2370 + * dpio_close() - Close the control session of the object
2371 + * @mc_io: Pointer to MC portal's I/O object
2372 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2373 + * @token: Token of DPIO object
2374 + *
2375 + * Return: '0' on Success; Error code otherwise.
2376 + */
2377 +int dpio_close(struct fsl_mc_io *mc_io,
2378 + uint32_t cmd_flags,
2379 + uint16_t token);
2380 +
2381 +/**
2382 + * enum dpio_channel_mode - DPIO notification channel mode
2383 + * @DPIO_NO_CHANNEL: No support for notification channel
2384 + * @DPIO_LOCAL_CHANNEL: Notifications on data availability can be received by a
2385 + * dedicated channel in the DPIO; user should point the queue's
2386 + * destination in the relevant interface to this DPIO
2387 + */
2388 +enum dpio_channel_mode {
2389 + DPIO_NO_CHANNEL = 0,
2390 + DPIO_LOCAL_CHANNEL = 1,
2391 +};
2392 +
2393 +/**
2394 + * struct dpio_cfg - Structure representing DPIO configuration
2395 + * @channel_mode: Notification channel mode
2396 + * @num_priorities: Number of priorities for the notification channel (1-8);
2397 + * relevant only if 'channel_mode = DPIO_LOCAL_CHANNEL'
2398 + */
2399 +struct dpio_cfg {
2400 + enum dpio_channel_mode channel_mode;
2401 + uint8_t num_priorities;
2402 +};
2403 +
2404 +/**
2405 + * dpio_create() - Create the DPIO object.
2406 + * @mc_io: Pointer to MC portal's I/O object
2407 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2408 + * @cfg: Configuration structure
2409 + * @token: Returned token; use in subsequent API calls
2410 + *
2411 + * Create the DPIO object, allocate required resources and
2412 + * perform required initialization.
2413 + *
2414 + * The object can be created either by declaring it in the
2415 + * DPL file, or by calling this function.
2416 + *
2417 + * This function returns a unique authentication token,
2418 + * associated with the specific object ID and the specific MC
2419 + * portal; this token must be used in all subsequent calls to
2420 + * this specific object. For objects that are created using the
2421 + * DPL file, call dpio_open() function to get an authentication
2422 + * token first.
2423 + *
2424 + * Return: '0' on Success; Error code otherwise.
2425 + */
2426 +int dpio_create(struct fsl_mc_io *mc_io,
2427 + uint32_t cmd_flags,
2428 + const struct dpio_cfg *cfg,
2429 + uint16_t *token);
2430 +
2431 +/**
2432 + * dpio_destroy() - Destroy the DPIO object and release all its resources.
2433 + * @mc_io: Pointer to MC portal's I/O object
2434 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2435 + * @token: Token of DPIO object
2436 + *
2437 + * Return: '0' on Success; Error code otherwise
2438 + */
2439 +int dpio_destroy(struct fsl_mc_io *mc_io,
2440 + uint32_t cmd_flags,
2441 + uint16_t token);
2442 +
2443 +/**
2444 + * dpio_enable() - Enable the DPIO, allow I/O portal operations.
2445 + * @mc_io: Pointer to MC portal's I/O object
2446 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2447 + * @token: Token of DPIO object
2448 + *
2449 + * Return: '0' on Success; Error code otherwise
2450 + */
2451 +int dpio_enable(struct fsl_mc_io *mc_io,
2452 + uint32_t cmd_flags,
2453 + uint16_t token);
2454 +
2455 +/**
2456 + * dpio_disable() - Disable the DPIO, stop any I/O portal operation.
2457 + * @mc_io: Pointer to MC portal's I/O object
2458 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2459 + * @token: Token of DPIO object
2460 + *
2461 + * Return: '0' on Success; Error code otherwise
2462 + */
2463 +int dpio_disable(struct fsl_mc_io *mc_io,
2464 + uint32_t cmd_flags,
2465 + uint16_t token);
2466 +
2467 +/**
2468 + * dpio_is_enabled() - Check if the DPIO is enabled.
2469 + * @mc_io: Pointer to MC portal's I/O object
2470 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2471 + * @token: Token of DPIO object
2472 + * @en: Returns '1' if object is enabled; '0' otherwise
2473 + *
2474 + * Return: '0' on Success; Error code otherwise.
2475 + */
2476 +int dpio_is_enabled(struct fsl_mc_io *mc_io,
2477 + uint32_t cmd_flags,
2478 + uint16_t token,
2479 + int *en);
2480 +
2481 +/**
2482 + * dpio_reset() - Reset the DPIO, returns the object to initial state.
2483 + * @mc_io: Pointer to MC portal's I/O object
2484 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2485 + * @token: Token of DPIO object
2486 + *
2487 + * Return: '0' on Success; Error code otherwise.
2488 + */
2489 +int dpio_reset(struct fsl_mc_io *mc_io,
2490 + uint32_t cmd_flags,
2491 + uint16_t token);
2492 +
2493 +/**
2494 + * dpio_set_stashing_destination() - Set the stashing destination.
2495 + * @mc_io: Pointer to MC portal's I/O object
2496 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2497 + * @token: Token of DPIO object
2498 + * @sdest: stashing destination value
2499 + *
2500 + * Return: '0' on Success; Error code otherwise.
2501 + */
2502 +int dpio_set_stashing_destination(struct fsl_mc_io *mc_io,
2503 + uint32_t cmd_flags,
2504 + uint16_t token,
2505 + uint8_t sdest);
2506 +
2507 +/**
2508 + * dpio_get_stashing_destination() - Get the stashing destination.
2509 + * @mc_io: Pointer to MC portal's I/O object
2510 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2511 + * @token: Token of DPIO object
2512 + * @sdest: Returns the stashing destination value
2513 + *
2514 + * Return: '0' on Success; Error code otherwise.
2515 + */
2516 +int dpio_get_stashing_destination(struct fsl_mc_io *mc_io,
2517 + uint32_t cmd_flags,
2518 + uint16_t token,
2519 + uint8_t *sdest);
2520 +
2521 +/**
2522 + * dpio_add_static_dequeue_channel() - Add a static dequeue channel.
2523 + * @mc_io: Pointer to MC portal's I/O object
2524 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2525 + * @token: Token of DPIO object
2526 + * @dpcon_id: DPCON object ID
2527 + * @channel_index: Returned channel index to be used in qbman API
2528 + *
2529 + * Return: '0' on Success; Error code otherwise.
2530 + */
2531 +int dpio_add_static_dequeue_channel(struct fsl_mc_io *mc_io,
2532 + uint32_t cmd_flags,
2533 + uint16_t token,
2534 + int dpcon_id,
2535 + uint8_t *channel_index);
2536 +
2537 +/**
2538 + * dpio_remove_static_dequeue_channel() - Remove a static dequeue channel.
2539 + * @mc_io: Pointer to MC portal's I/O object
2540 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2541 + * @token: Token of DPIO object
2542 + * @dpcon_id: DPCON object ID
2543 + *
2544 + * Return: '0' on Success; Error code otherwise.
2545 + */
2546 +int dpio_remove_static_dequeue_channel(struct fsl_mc_io *mc_io,
2547 + uint32_t cmd_flags,
2548 + uint16_t token,
2549 + int dpcon_id);
2550 +
2551 +/**
2552 + * DPIO IRQ Index and Events
2553 + */
2554 +
2555 +/**
2556 + * IRQ software-portal index
2557 + */
2558 +#define DPIO_IRQ_SWP_INDEX 0
2559 +
2560 +/**
2561 + * struct dpio_irq_cfg - IRQ configuration
2562 + * @addr: Address that must be written to signal a message-based interrupt
2563 + * @val: Value to write into irq_addr address
2564 + * @irq_num: A user defined number associated with this IRQ
2565 + */
2566 +struct dpio_irq_cfg {
2567 + uint64_t addr;
2568 + uint32_t val;
2569 + int irq_num;
2570 +};
2571 +
2572 +/**
2573 + * dpio_set_irq() - Set IRQ information for the DPIO to trigger an interrupt.
2574 + * @mc_io: Pointer to MC portal's I/O object
2575 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2576 + * @token: Token of DPIO object
2577 + * @irq_index: Identifies the interrupt index to configure
2578 + * @irq_cfg: IRQ configuration
2579 + *
2580 + * Return: '0' on Success; Error code otherwise.
2581 + */
2582 +int dpio_set_irq(struct fsl_mc_io *mc_io,
2583 + uint32_t cmd_flags,
2584 + uint16_t token,
2585 + uint8_t irq_index,
2586 + struct dpio_irq_cfg *irq_cfg);
2587 +
2588 +/**
2589 + * dpio_get_irq() - Get IRQ information from the DPIO.
2590 + *
2591 + * @mc_io: Pointer to MC portal's I/O object
2592 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2593 + * @token: Token of DPIO object
2594 + * @irq_index: The interrupt index to configure
2595 + * @type: Interrupt type: 0 represents message interrupt
2596 + * type (both irq_addr and irq_val are valid)
2597 + * @irq_cfg: IRQ attributes
2598 + *
2599 + * Return: '0' on Success; Error code otherwise.
2600 + */
2601 +int dpio_get_irq(struct fsl_mc_io *mc_io,
2602 + uint32_t cmd_flags,
2603 + uint16_t token,
2604 + uint8_t irq_index,
2605 + int *type,
2606 + struct dpio_irq_cfg *irq_cfg);
2607 +
2608 +/**
2609 + * dpio_set_irq_enable() - Set overall interrupt state.
2610 + * @mc_io: Pointer to MC portal's I/O object
2611 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2612 + * @token: Token of DPIO object
2613 + * @irq_index: The interrupt index to configure
2614 + * @en: Interrupt state - enable = 1, disable = 0
2615 + *
2616 + * Allows GPP software to control when interrupts are generated.
2617 + * Each interrupt can have up to 32 causes. The enable/disable controls the
2618 + * overall interrupt state. If the interrupt is disabled, no cause will
2619 + * raise an interrupt.
2620 + *
2621 + * Return: '0' on Success; Error code otherwise.
2622 + */
2623 +int dpio_set_irq_enable(struct fsl_mc_io *mc_io,
2624 + uint32_t cmd_flags,
2625 + uint16_t token,
2626 + uint8_t irq_index,
2627 + uint8_t en);
2628 +
2629 +/**
2630 + * dpio_get_irq_enable() - Get overall interrupt state
2631 + * @mc_io: Pointer to MC portal's I/O object
2632 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2633 + * @token: Token of DPIO object
2634 + * @irq_index: The interrupt index to configure
2635 + * @en: Returned interrupt state - enable = 1, disable = 0
2636 + *
2637 + * Return: '0' on Success; Error code otherwise.
2638 + */
2639 +int dpio_get_irq_enable(struct fsl_mc_io *mc_io,
2640 + uint32_t cmd_flags,
2641 + uint16_t token,
2642 + uint8_t irq_index,
2643 + uint8_t *en);
2644 +
2645 +/**
2646 + * dpio_set_irq_mask() - Set interrupt mask.
2647 + * @mc_io: Pointer to MC portal's I/O object
2648 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2649 + * @token: Token of DPIO object
2650 + * @irq_index: The interrupt index to configure
2651 + * @mask: event mask to trigger interrupt;
2652 + * each bit:
2653 + * 0 = ignore event
2654 + * 1 = consider event for asserting IRQ
2655 + *
2656 + * Every interrupt can have up to 32 causes and the interrupt model supports
2657 + * masking/unmasking each cause independently
2658 + *
2659 + * Return: '0' on Success; Error code otherwise.
2660 + */
2661 +int dpio_set_irq_mask(struct fsl_mc_io *mc_io,
2662 + uint32_t cmd_flags,
2663 + uint16_t token,
2664 + uint8_t irq_index,
2665 + uint32_t mask);
2666 +
2667 +/**
2668 + * dpio_get_irq_mask() - Get interrupt mask.
2669 + * @mc_io: Pointer to MC portal's I/O object
2670 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2671 + * @token: Token of DPIO object
2672 + * @irq_index: The interrupt index to configure
2673 + * @mask: Returned event mask to trigger interrupt
2674 + *
2675 + * Every interrupt can have up to 32 causes and the interrupt model supports
2676 + * masking/unmasking each cause independently
2677 + *
2678 + * Return: '0' on Success; Error code otherwise.
2679 + */
2680 +int dpio_get_irq_mask(struct fsl_mc_io *mc_io,
2681 + uint32_t cmd_flags,
2682 + uint16_t token,
2683 + uint8_t irq_index,
2684 + uint32_t *mask);
2685 +
2686 +/**
2687 + * dpio_get_irq_status() - Get the current status of any pending interrupts.
2688 + * @mc_io: Pointer to MC portal's I/O object
2689 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2690 + * @token: Token of DPIO object
2691 + * @irq_index: The interrupt index to configure
2692 + * @status: Returned interrupt status - one bit per cause:
2693 + * 0 = no interrupt pending
2694 + * 1 = interrupt pending
2695 + *
2696 + * Return: '0' on Success; Error code otherwise.
2697 + */
2698 +int dpio_get_irq_status(struct fsl_mc_io *mc_io,
2699 + uint32_t cmd_flags,
2700 + uint16_t token,
2701 + uint8_t irq_index,
2702 + uint32_t *status);
2703 +
2704 +/**
2705 + * dpio_clear_irq_status() - Clear a pending interrupt's status
2706 + * @mc_io: Pointer to MC portal's I/O object
2707 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2708 + * @token: Token of DPIO object
2709 + * @irq_index: The interrupt index to configure
2710 + * @status: bits to clear (W1C) - one bit per cause:
2711 + * 0 = don't change
2712 + * 1 = clear status bit
2713 + *
2714 + * Return: '0' on Success; Error code otherwise.
2715 + */
2716 +int dpio_clear_irq_status(struct fsl_mc_io *mc_io,
2717 + uint32_t cmd_flags,
2718 + uint16_t token,
2719 + uint8_t irq_index,
2720 + uint32_t status);
2721 +
2722 +/**
2723 + * struct dpio_attr - Structure representing DPIO attributes
2724 + * @id: DPIO object ID
2725 + * @version: DPIO version
2726 + * @qbman_portal_ce_offset: offset of the software portal cache-enabled area
2727 + * @qbman_portal_ci_offset: offset of the software portal cache-inhibited area
2728 + * @qbman_portal_id: Software portal ID
2729 + * @channel_mode: Notification channel mode
2730 + * @num_priorities: Number of priorities for the notification channel (1-8);
2731 + * relevant only if 'channel_mode = DPIO_LOCAL_CHANNEL'
2732 + * @qbman_version: QBMAN version
2733 + */
2734 +struct dpio_attr {
2735 + int id;
2736 + /**
2737 + * struct version - DPIO version
2738 + * @major: DPIO major version
2739 + * @minor: DPIO minor version
2740 + */
2741 + struct {
2742 + uint16_t major;
2743 + uint16_t minor;
2744 + } version;
2745 + uint64_t qbman_portal_ce_offset;
2746 + uint64_t qbman_portal_ci_offset;
2747 + uint16_t qbman_portal_id;
2748 + enum dpio_channel_mode channel_mode;
2749 + uint8_t num_priorities;
2750 + uint32_t qbman_version;
2751 +};
2752 +
2753 +/**
2754 + * dpio_get_attributes() - Retrieve DPIO attributes
2755 + * @mc_io: Pointer to MC portal's I/O object
2756 + * @cmd_flags: Command flags; one or more of 'MC_CMD_FLAG_'
2757 + * @token: Token of DPIO object
2758 + * @attr: Returned object's attributes
2759 + *
2760 + * Return: '0' on Success; Error code otherwise
2761 + */
2762 +int dpio_get_attributes(struct fsl_mc_io *mc_io,
2763 + uint32_t cmd_flags,
2764 + uint16_t token,
2765 + struct dpio_attr *attr);
2766 +#endif /* __FSL_DPIO_H */
2767 --- /dev/null
2768 +++ b/drivers/staging/fsl-mc/bus/dpio/fsl_dpio_cmd.h
2769 @@ -0,0 +1,184 @@
2770 +/* Copyright 2013-2015 Freescale Semiconductor Inc.
2771 + *
2772 + * Redistribution and use in source and binary forms, with or without
2773 + * modification, are permitted provided that the following conditions are met:
2774 + * * Redistributions of source code must retain the above copyright
2775 + * notice, this list of conditions and the following disclaimer.
2776 + * * Redistributions in binary form must reproduce the above copyright
2777 + * notice, this list of conditions and the following disclaimer in the
2778 + * documentation and/or other materials provided with the distribution.
2779 + * * Neither the name of the above-listed copyright holders nor the
2780 + * names of any contributors may be used to endorse or promote products
2781 + * derived from this software without specific prior written permission.
2782 + *
2783 + *
2784 + * ALTERNATIVELY, this software may be distributed under the terms of the
2785 + * GNU General Public License ("GPL") as published by the Free Software
2786 + * Foundation, either version 2 of that License or (at your option) any
2787 + * later version.
2788 + *
2789 + * THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS"
2790 + * AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
2791 + * IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE
2792 + * ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT HOLDERS OR CONTRIBUTORS BE
2793 + * LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
2794 + * CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
2795 + * SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS
2796 + * INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN
2797 + * CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE)
2798 + * ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE
2799 + * POSSIBILITY OF SUCH DAMAGE.
2800 + */
2801 +#ifndef _FSL_DPIO_CMD_H
2802 +#define _FSL_DPIO_CMD_H
2803 +
2804 +/* DPIO Version */
2805 +#define DPIO_VER_MAJOR 3
2806 +#define DPIO_VER_MINOR 2
2807 +
2808 +/* Command IDs */
2809 +#define DPIO_CMDID_CLOSE 0x800
2810 +#define DPIO_CMDID_OPEN 0x803
2811 +#define DPIO_CMDID_CREATE 0x903
2812 +#define DPIO_CMDID_DESTROY 0x900
2813 +
2814 +#define DPIO_CMDID_ENABLE 0x002
2815 +#define DPIO_CMDID_DISABLE 0x003
2816 +#define DPIO_CMDID_GET_ATTR 0x004
2817 +#define DPIO_CMDID_RESET 0x005
2818 +#define DPIO_CMDID_IS_ENABLED 0x006
2819 +
2820 +#define DPIO_CMDID_SET_IRQ 0x010
2821 +#define DPIO_CMDID_GET_IRQ 0x011
2822 +#define DPIO_CMDID_SET_IRQ_ENABLE 0x012
2823 +#define DPIO_CMDID_GET_IRQ_ENABLE 0x013
2824 +#define DPIO_CMDID_SET_IRQ_MASK 0x014
2825 +#define DPIO_CMDID_GET_IRQ_MASK 0x015
2826 +#define DPIO_CMDID_GET_IRQ_STATUS 0x016
2827 +#define DPIO_CMDID_CLEAR_IRQ_STATUS 0x017
2828 +
2829 +#define DPIO_CMDID_SET_STASHING_DEST 0x120
2830 +#define DPIO_CMDID_GET_STASHING_DEST 0x121
2831 +#define DPIO_CMDID_ADD_STATIC_DEQUEUE_CHANNEL 0x122
2832 +#define DPIO_CMDID_REMOVE_STATIC_DEQUEUE_CHANNEL 0x123
2833 +
2834 +/* cmd, param, offset, width, type, arg_name */
2835 +#define DPIO_CMD_OPEN(cmd, dpio_id) \
2836 + MC_CMD_OP(cmd, 0, 0, 32, int, dpio_id)
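+
+/*
+ * Editor's illustrative sketch (not part of the original patch) of how these
+ * encode macros are typically used, assuming mc_encode_cmd_header() and
+ * MC_CMD_HDR_READ_TOKEN() from the fsl-mc bus headers:
+ *
+ *	struct mc_command cmd = { 0 };
+ *
+ *	cmd.header = mc_encode_cmd_header(DPIO_CMDID_OPEN, cmd_flags, 0);
+ *	DPIO_CMD_OPEN(cmd, dpio_id);
+ *	err = mc_send_command(mc_io, &cmd);
+ *	if (!err)
+ *		*token = MC_CMD_HDR_READ_TOKEN(cmd.header);
+ */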
2837 +
2838 +/* cmd, param, offset, width, type, arg_name */
2839 +#define DPIO_CMD_CREATE(cmd, cfg) \
2840 +do { \
2841 + MC_CMD_OP(cmd, 0, 16, 2, enum dpio_channel_mode, \
2842 + cfg->channel_mode);\
2843 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, cfg->num_priorities);\
2844 +} while (0)
2845 +
2846 +/* cmd, param, offset, width, type, arg_name */
2847 +#define DPIO_RSP_IS_ENABLED(cmd, en) \
2848 + MC_RSP_OP(cmd, 0, 0, 1, int, en)
2849 +
2850 +/* cmd, param, offset, width, type, arg_name */
2851 +#define DPIO_CMD_SET_IRQ(cmd, irq_index, irq_cfg) \
2852 +do { \
2853 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, irq_index);\
2854 + MC_CMD_OP(cmd, 0, 32, 32, uint32_t, irq_cfg->val);\
2855 + MC_CMD_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr);\
2856 + MC_CMD_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
2857 +} while (0)
2858 +
2859 +/* cmd, param, offset, width, type, arg_name */
2860 +#define DPIO_CMD_GET_IRQ(cmd, irq_index) \
2861 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
2862 +
2863 +/* cmd, param, offset, width, type, arg_name */
2864 +#define DPIO_RSP_GET_IRQ(cmd, type, irq_cfg) \
2865 +do { \
2866 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, irq_cfg->val); \
2867 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, irq_cfg->addr); \
2868 + MC_RSP_OP(cmd, 2, 0, 32, int, irq_cfg->irq_num); \
2869 + MC_RSP_OP(cmd, 2, 32, 32, int, type); \
2870 +} while (0)
2871 +
2872 +/* cmd, param, offset, width, type, arg_name */
2873 +#define DPIO_CMD_SET_IRQ_ENABLE(cmd, irq_index, en) \
2874 +do { \
2875 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, en); \
2876 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
2877 +} while (0)
2878 +
2879 +/* cmd, param, offset, width, type, arg_name */
2880 +#define DPIO_CMD_GET_IRQ_ENABLE(cmd, irq_index) \
2881 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
2882 +
2883 +/* cmd, param, offset, width, type, arg_name */
2884 +#define DPIO_RSP_GET_IRQ_ENABLE(cmd, en) \
2885 + MC_RSP_OP(cmd, 0, 0, 8, uint8_t, en)
2886 +
2887 +/* cmd, param, offset, width, type, arg_name */
2888 +#define DPIO_CMD_SET_IRQ_MASK(cmd, irq_index, mask) \
2889 +do { \
2890 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, mask); \
2891 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
2892 +} while (0)
2893 +
2894 +/* cmd, param, offset, width, type, arg_name */
2895 +#define DPIO_CMD_GET_IRQ_MASK(cmd, irq_index) \
2896 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index)
2897 +
2898 +/* cmd, param, offset, width, type, arg_name */
2899 +#define DPIO_RSP_GET_IRQ_MASK(cmd, mask) \
2900 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, mask)
2901 +
2902 +/* cmd, param, offset, width, type, arg_name */
2903 +#define DPIO_CMD_GET_IRQ_STATUS(cmd, irq_index, status) \
2904 +do { \
2905 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status);\
2906 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
2907 +} while (0)
2908 +
2909 +/* cmd, param, offset, width, type, arg_name */
2910 +#define DPIO_RSP_GET_IRQ_STATUS(cmd, status) \
2911 + MC_RSP_OP(cmd, 0, 0, 32, uint32_t, status)
2912 +
2913 +/* cmd, param, offset, width, type, arg_name */
2914 +#define DPIO_CMD_CLEAR_IRQ_STATUS(cmd, irq_index, status) \
2915 +do { \
2916 + MC_CMD_OP(cmd, 0, 0, 32, uint32_t, status); \
2917 + MC_CMD_OP(cmd, 0, 32, 8, uint8_t, irq_index);\
2918 +} while (0)
2919 +
2920 +/* cmd, param, offset, width, type, arg_name */
2921 +#define DPIO_RSP_GET_ATTR(cmd, attr) \
2922 +do { \
2923 + MC_RSP_OP(cmd, 0, 0, 32, int, attr->id);\
2924 + MC_RSP_OP(cmd, 0, 32, 16, uint16_t, attr->qbman_portal_id);\
2925 + MC_RSP_OP(cmd, 0, 48, 8, uint8_t, attr->num_priorities);\
2926 + MC_RSP_OP(cmd, 0, 56, 4, enum dpio_channel_mode, attr->channel_mode);\
2927 + MC_RSP_OP(cmd, 1, 0, 64, uint64_t, attr->qbman_portal_ce_offset);\
2928 + MC_RSP_OP(cmd, 2, 0, 64, uint64_t, attr->qbman_portal_ci_offset);\
2929 + MC_RSP_OP(cmd, 3, 0, 16, uint16_t, attr->version.major);\
2930 + MC_RSP_OP(cmd, 3, 16, 16, uint16_t, attr->version.minor);\
2931 + MC_RSP_OP(cmd, 3, 32, 32, uint32_t, attr->qbman_version);\
2932 +} while (0)
2933 +
2934 +/* cmd, param, offset, width, type, arg_name */
2935 +#define DPIO_CMD_SET_STASHING_DEST(cmd, sdest) \
2936 + MC_CMD_OP(cmd, 0, 0, 8, uint8_t, sdest)
2937 +
2938 +/* cmd, param, offset, width, type, arg_name */
2939 +#define DPIO_RSP_GET_STASHING_DEST(cmd, sdest) \
2940 + MC_RSP_OP(cmd, 0, 0, 8, uint8_t, sdest)
2941 +
2942 +/* cmd, param, offset, width, type, arg_name */
2943 +#define DPIO_CMD_ADD_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id) \
2944 + MC_CMD_OP(cmd, 0, 0, 32, int, dpcon_id)
2945 +
2946 +/* cmd, param, offset, width, type, arg_name */
2947 +#define DPIO_RSP_ADD_STATIC_DEQUEUE_CHANNEL(cmd, channel_index) \
2948 + MC_RSP_OP(cmd, 0, 0, 8, uint8_t, channel_index)
2949 +
2950 +/* cmd, param, offset, width, type, arg_name */
2951 +#define DPIO_CMD_REMOVE_STATIC_DEQUEUE_CHANNEL(cmd, dpcon_id) \
2952 + MC_CMD_OP(cmd, 0, 0, 32, int, dpcon_id)
2953 +#endif /* _FSL_DPIO_CMD_H */
2954 --- /dev/null
2955 +++ b/drivers/staging/fsl-mc/bus/dpio/fsl_qbman_base.h
2956 @@ -0,0 +1,123 @@
2957 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
2958 + *
2959 + * Redistribution and use in source and binary forms, with or without
2960 + * modification, are permitted provided that the following conditions are met:
2961 + * * Redistributions of source code must retain the above copyright
2962 + * notice, this list of conditions and the following disclaimer.
2963 + * * Redistributions in binary form must reproduce the above copyright
2964 + * notice, this list of conditions and the following disclaimer in the
2965 + * documentation and/or other materials provided with the distribution.
2966 + * * Neither the name of Freescale Semiconductor nor the
2967 + * names of its contributors may be used to endorse or promote products
2968 + * derived from this software without specific prior written permission.
2969 + *
2970 + *
2971 + * ALTERNATIVELY, this software may be distributed under the terms of the
2972 + * GNU General Public License ("GPL") as published by the Free Software
2973 + * Foundation, either version 2 of that License or (at your option) any
2974 + * later version.
2975 + *
2976 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
2977 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
2978 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
2979 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
2980 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
2981 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
2982 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
2983 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
2984 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
2985 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
2986 + */
2987 +#ifndef _FSL_QBMAN_BASE_H
2988 +#define _FSL_QBMAN_BASE_H
2989 +
2990 +/**
2991 + * struct qbman_block_desc - qbman block descriptor structure
2992 + *
2993 + * Descriptor for a QBMan instance on the SoC. On partitions/targets that do not
2994 + * control this QBMan instance, these values may simply be place-holders. The
2995 + * idea is simply to be able to distinguish between instances, e.g. so that SWP
2996 + * descriptors can identify which QBMan instance they belong to.
2997 + */
2998 +struct qbman_block_desc {
2999 + void *ccsr_reg_bar; /* CCSR register map */
3000 + int irq_rerr; /* Recoverable error interrupt line */
3001 + int irq_nrerr; /* Non-recoverable error interrupt line */
3002 +};
3003 +
3004 +/**
3005 + * struct qbman_swp_desc - qbman software portal descriptor structure
3006 + *
3007 + * Descriptor for a QBMan software portal, expressed in terms that make sense to
3008 + * the user context. I.e. on MC, this information is likely to be true-physical,
3009 + * and instantiated statically at compile-time. On GPP, this information is
3010 + * likely to be obtained via "discovery" over a partition's "layerscape bus"
3011 + * (i.e. in response to an MC portal command), and would take into account any
3012 + * virtualisation of the GPP user's address space and/or interrupt numbering.
3013 + */
3014 +struct qbman_swp_desc {
3015 + const struct qbman_block_desc *block; /* The QBMan instance */
3016 + void *cena_bar; /* Cache-enabled portal register map */
3017 + void *cinh_bar; /* Cache-inhibited portal register map */
3018 + uint32_t qman_version;
3019 +};
3020 +
3021 +/* Driver object for managing a QBMan portal */
3022 +struct qbman_swp;
3023 +
3024 +/**
3025 + * struct qbman_fd - basic structure for qbman frame descriptor
3026 + *
3027 + * Place-holder for FDs; we represent it via the simplest form that we need for
3028 + * now. Different overlays may be needed to support different options, etc. (It
3029 + * is impractical to define One True Struct, because the resulting encoding
3030 + * routines (lots of read-modify-writes) would be worst-case performance whether
3031 + * or not circumstances required them.)
3032 + *
3033 + * Note, as with all data-structures exchanged between software and hardware (be
3034 + * they located in the portal register map or DMA'd to and from main-memory),
3035 + * the driver ensures that the caller of the driver API sees the data-structures
3036 + * in host-endianness. "struct qbman_fd" is no exception. The 32-bit words
3037 + * contained within this structure are represented in host-endianness, even if
3038 + * hardware always treats them as little-endian. As such, if any of these fields
3039 + * are interpreted in a binary (rather than numerical) fashion by hardware
3040 + * blocks (eg. accelerators), then the user should be careful. We illustrate
3041 + * with an example;
3042 + *
3043 + * Suppose the desired behaviour of an accelerator is controlled by the "frc"
3044 + * field of the FDs that are sent to it. Suppose also that the behaviour desired
3045 + * by the user corresponds to an "frc" value which is expressed as the literal
3046 + * sequence of bytes 0xfe, 0xed, 0xab, and 0xba. So "frc" should be the 32-bit
3047 + * value in which 0xfe is the first byte and 0xba is the last byte, and as
3048 + * hardware is little-endian, this amounts to a 32-bit "value" of 0xbaabedfe. If
3049 + * the software is little-endian also, this can simply be achieved by setting
3050 + * frc=0xbaabedfe. On the other hand, if software is big-endian, it should set
3051 + * frc=0xfeedabba! The best way of avoiding trouble with this sort of thing is
3052 + * to treat the 32-bit words as numerical values, in which the offset of a field
3053 + * from the beginning of the first byte (as required or generated by hardware)
3054 + * is numerically encoded by a left-shift (i.e. by multiplying the field by the
3055 + * corresponding power of 2). I.e. in the current example, software could set
3056 + * "frc" in the following way, and it would work correctly on both little-endian
3057 + * and big-endian operation;
3058 + * fd.frc = (0xfe << 0) | (0xed << 8) | (0xab << 16) | (0xba << 24);
3059 + */
3060 +struct qbman_fd {
3061 + union {
3062 + uint32_t words[8];
3063 + struct qbman_fd_simple {
3064 + uint32_t addr_lo;
3065 + uint32_t addr_hi;
3066 + uint32_t len;
3067 + /* offset in the MS 16 bits, BPID in the LS 16 bits */
3068 + uint32_t bpid_offset;
3069 + uint32_t frc; /* frame context */
3070 + /* "err", "va", "cbmt", "asal", [...] */
3071 + uint32_t ctrl;
3072 + /* flow context */
3073 + uint32_t flc_lo;
3074 + uint32_t flc_hi;
3075 + } simple;
3076 + };
3077 +};
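+
+/*
+ * Editor's illustrative sketch (not part of the original patch): filling the
+ * 'simple' overlay in the endian-safe, numerical style described above.
+ * lower_32_bits()/upper_32_bits() are assumed from <linux/kernel.h>; the
+ * 'addr', 'len' and 'bpid' inputs are hypothetical.
+ */
+static inline void example_fd_init(struct qbman_fd *fd, uint64_t addr,
+				   uint32_t len, uint16_t bpid)
+{
+	fd->simple.addr_lo = lower_32_bits(addr);
+	fd->simple.addr_hi = upper_32_bits(addr);
+	fd->simple.len = len;
+	/* zero offset in the MS 16 bits, BPID in the LS 16 bits */
+	fd->simple.bpid_offset = (uint32_t)bpid;
+	fd->simple.frc = 0;
+	fd->simple.ctrl = 0;
+	fd->simple.flc_lo = 0;
+	fd->simple.flc_hi = 0;
+}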
3078 +
3079 +#endif /* !_FSL_QBMAN_BASE_H */
3080 --- /dev/null
3081 +++ b/drivers/staging/fsl-mc/bus/dpio/fsl_qbman_portal.h
3082 @@ -0,0 +1,753 @@
3083 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
3084 + *
3085 + * Redistribution and use in source and binary forms, with or without
3086 + * modification, are permitted provided that the following conditions are met:
3087 + * * Redistributions of source code must retain the above copyright
3088 + * notice, this list of conditions and the following disclaimer.
3089 + * * Redistributions in binary form must reproduce the above copyright
3090 + * notice, this list of conditions and the following disclaimer in the
3091 + * documentation and/or other materials provided with the distribution.
3092 + * * Neither the name of Freescale Semiconductor nor the
3093 + * names of its contributors may be used to endorse or promote products
3094 + * derived from this software without specific prior written permission.
3095 + *
3096 + *
3097 + * ALTERNATIVELY, this software may be distributed under the terms of the
3098 + * GNU General Public License ("GPL") as published by the Free Software
3099 + * Foundation, either version 2 of that License or (at your option) any
3100 + * later version.
3101 + *
3102 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
3103 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
3104 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
3105 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
3106 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
3107 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
3108 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
3109 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
3110 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
3111 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
3112 + */
3113 +#ifndef _FSL_QBMAN_PORTAL_H
3114 +#define _FSL_QBMAN_PORTAL_H
3115 +
3116 +#include "fsl_qbman_base.h"
3117 +
3118 +/**
3119 + * qbman_swp_init() - Create a functional object representing the given
3120 + * QBMan portal descriptor.
3121 + * @d: the given qbman swp descriptor
3122 + *
3123 + * Return qbman_swp portal object for success, NULL if the object cannot
3124 + * be created.
3125 + */
3126 +struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d);
3127 +/**
3128 + * qbman_swp_finish() - Destroy a functional object representing
3129 + * the given QBMan portal descriptor.
3130 + * @p: the qbman_swp object to be destroyed.
3131 + *
3132 + */
3133 +void qbman_swp_finish(struct qbman_swp *p);
3134 +
3135 +/**
3136 + * qbman_swp_get_desc() - Get the descriptor of the given portal object.
3137 + * @p: the given portal object.
3138 + *
3139 + * Return the descriptor for this portal.
3140 + */
3141 +const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p);
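+
+/*
+ * Example (editorial sketch; "desc" is assumed to come from the DPIO
+ * driver's probe path): the basic lifecycle of a portal object.
+ *
+ *	struct qbman_swp *swp = qbman_swp_init(desc);
+ *
+ *	if (!swp)
+ *		return -ENODEV;
+ *	... use the qbman_swp_*() APIs below ...
+ *	qbman_swp_finish(swp);
+ */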
3142 +
3143 + /**************/
3144 + /* Interrupts */
3145 + /**************/
3146 +
3147 +/* See the QBMan driver API documentation for details on the interrupt
3148 + * mechanisms. */
3149 +#define QBMAN_SWP_INTERRUPT_EQRI ((uint32_t)0x00000001)
3150 +#define QBMAN_SWP_INTERRUPT_EQDI ((uint32_t)0x00000002)
3151 +#define QBMAN_SWP_INTERRUPT_DQRI ((uint32_t)0x00000004)
3152 +#define QBMAN_SWP_INTERRUPT_RCRI ((uint32_t)0x00000008)
3153 +#define QBMAN_SWP_INTERRUPT_RCDI ((uint32_t)0x00000010)
3154 +#define QBMAN_SWP_INTERRUPT_VDCI ((uint32_t)0x00000020)
3155 +
3156 +/**
3157 + * qbman_swp_interrupt_get_vanish()
3158 + * qbman_swp_interrupt_set_vanish() - Get/Set the data in software portal
3159 + * interrupt status disable register.
3160 + * @p: the given software portal object.
3161 + * @mask: The mask to set in the SWP_ISDR register.
3162 + *
3163 + * Return the settings in SWP_ISDR register for Get function.
3164 + */
3165 +uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p);
3166 +void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask);
3167 +
3168 +/**
3169 + * qbman_swp_interrupt_read_status()
3170 + * qbman_swp_interrupt_clear_status() - Read/Clear the data in software portal
3171 + * interrupt status register.
3172 + * @p: the given software portal object.
3173 + * @mask: The mask to set in SWP_ISR register.
3174 + *
3175 + * Return the settings in SWP_ISR register for Get function.
3176 + *
3177 + */
3178 +uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p);
3179 +void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask);
3180 +
3181 +/**
3182 + * qbman_swp_interrupt_get_trigger()
3183 + * qbman_swp_interrupt_set_trigger() - Get/Set the data in software portal
3184 + * interrupt enable register.
3185 + * @p: the given software portal object.
3186 + * @mask: The mask to set in SWP_IER register.
3187 + *
3188 + * Return the settings in SWP_IER register for Get function.
3189 + */
3190 +uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p);
3191 +void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask);
3192 +
3193 +/**
3194 + * qbman_swp_interrupt_get_inhibit()
3195 + * qbman_swp_interrupt_set_inhibit() - Get/Set the data in software portal
3196 + * interrupt inhibit register.
3197 + * @p: the given software portal object.
3198 + * @inhibit: whether to inhibit the IRQs (non-zero) or allow them (zero).
3199 + *
3200 + * Return the settings in SWP_IIR register for Get function.
3201 + */
3202 +int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p);
3203 +void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit);
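+
+/*
+ * Example (editorial sketch of one plausible threaded-IRQ usage of these
+ * calls; "swp" is assumed to be the portal that owns the IRQ): the hard
+ * handler reads the status and inhibits further interrupts, the threaded
+ * handler does the processing, clears the status and re-enables.
+ *
+ *	hard handler:
+ *		status = qbman_swp_interrupt_read_status(swp);
+ *		if (!status)
+ *			return IRQ_NONE;
+ *		qbman_swp_interrupt_set_inhibit(swp, 1);
+ *		return IRQ_WAKE_THREAD;
+ *
+ *	threaded handler:
+ *		... process the portal (DQRR etc.) ...
+ *		qbman_swp_interrupt_clear_status(swp, status);
+ *		qbman_swp_interrupt_set_inhibit(swp, 0);
+ *		return IRQ_HANDLED;
+ */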
3204 +
3205 + /************/
3206 + /* Dequeues */
3207 + /************/
3208 +
3209 +/* See the QBMan driver API documentation for details on the dequeue
3210 + * mechanisms. NB: the use of a 'dpaa2_' prefix for this type is because it is
3211 + * primarily used by the "DPIO" layer that sits above (and hides) the QBMan
3212 + * driver. The structure is defined in the DPIO interface, but to avoid circular
3213 + * dependencies we just pre/re-declare it here opaquely. */
3214 +struct dpaa2_dq;
3215 +
3216 +/* ------------------- */
3217 +/* Push-mode dequeuing */
3218 +/* ------------------- */
3219 +
3220 +/**
3221 + * qbman_swp_push_get() - Get the push dequeue setup.
3222 + * @p: the software portal object.
3223 + * @channel_idx: the channel index to query.
3224 + * @enabled: returned boolean to show whether the push dequeue is enabled for
3225 + * the given channel.
3226 + */
3227 +void qbman_swp_push_get(struct qbman_swp *, uint8_t channel_idx, int *enabled);
3228 +/**
3229 + * qbman_swp_push_set() - Enable or disable push dequeue.
3230 + * @p: the software portal object.
3231 + * @channel_idx: the channel index.
3232 + * @enable: enable or disable push dequeue.
3233 + *
3234 + * The user of a portal can enable and disable push-mode dequeuing of up to 16
3235 + * channels independently. It does not specify this toggling by channel IDs, but
3236 + * rather by specifying the index (from 0 to 15) that has been mapped to the
3237 + * desired channel.
3238 + */
3239 +void qbman_swp_push_set(struct qbman_swp *, uint8_t channel_idx, int enable);
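+
+/*
+ * Example (editorial sketch; channel index 0 is assumed to have been mapped
+ * to the desired channel beforehand): enabling push-mode dequeue on one
+ * channel index and reading the setting back.
+ *
+ *	int enabled;
+ *
+ *	qbman_swp_push_set(swp, 0, 1);
+ *	qbman_swp_push_get(swp, 0, &enabled);
+ *	WARN_ON(!enabled);
+ */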
3240 +
3241 +/* ------------------- */
3242 +/* Pull-mode dequeuing */
3243 +/* ------------------- */
3244 +
3245 +/**
3246 + * struct qbman_pull_desc - the structure for pull dequeue descriptor
3247 + */
3248 +struct qbman_pull_desc {
3249 + uint32_t dont_manipulate_directly[6];
3250 +};
3251 +
3252 +enum qbman_pull_type_e {
3253 + /* dequeue with priority precedence, respect intra-class scheduling */
3254 + qbman_pull_type_prio = 1,
3255 + /* dequeue with active FQ precedence, respect ICS */
3256 + qbman_pull_type_active,
3257 + /* dequeue with active FQ precedence, no ICS */
3258 + qbman_pull_type_active_noics
3259 +};
3260 +
3261 +/**
3262 + * qbman_pull_desc_clear() - Clear the contents of a descriptor to
3263 + * default/starting state.
3264 + * @d: the pull dequeue descriptor to be cleared.
3265 + */
3266 +void qbman_pull_desc_clear(struct qbman_pull_desc *d);
3267 +
3268 +/**
3269 + * qbman_pull_desc_set_storage()- Set the pull dequeue storage
3270 + * @d: the pull dequeue descriptor to be set.
3271 + * @storage: the pointer of the memory to store the dequeue result.
3272 + * @storage_phys: the physical address of the storage memory.
3273 + * @stash: to indicate whether write allocate is enabled.
3274 + *
3275 + * If not called, or if called with 'storage' as NULL, the resulting pull dequeues
3276 + * will produce results to DQRR. If 'storage' is non-NULL, then results are
3277 + * produced to the given memory location (using the physical/DMA address which
3278 + * the caller provides in 'storage_phys'), and 'stash' controls whether or not
3279 + * those writes to main-memory express a cache-warming attribute.
3280 + */
3281 +void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
3282 + struct dpaa2_dq *storage,
3283 + dma_addr_t storage_phys,
3284 + int stash);
3285 +/**
3286 + * qbman_pull_desc_set_numframes() - Set the number of frames to be dequeued.
3287 + * @d: the pull dequeue descriptor to be set.
3288 + * @numframes: number of frames to be set, must be between 1 and 16, inclusive.
3289 + */
3290 +void qbman_pull_desc_set_numframes(struct qbman_pull_desc *, uint8_t numframes);
3291 +
3292 +/**
3293 + * qbman_pull_desc_set_fq() - Set fqid from which the dequeue command dequeues.
3294 + * @fqid: the frame queue index of the given FQ.
3295 + *
3296 + * qbman_pull_desc_set_wq() - Set wqid from which the dequeue command dequeues.
3297 + * @wqid: composed of channel id and wqid within the channel.
3298 + * @dct: the dequeue command type.
3299 + *
3300 + * qbman_pull_desc_set_channel() - Set channelid from which the dequeue command
3301 + * dequeues.
3302 + * @chid: the channel id to be dequeued.
3303 + * @dct: the dequeue command type.
3304 + *
3305 + * Exactly one of the following descriptor "actions" should be set. (Calling any
3306 + * one of these will replace the effect of any prior call to one of these.)
3307 + * - pull dequeue from the given frame queue (FQ)
3308 + * - pull dequeue from any FQ in the given work queue (WQ)
3309 + * - pull dequeue from any FQ in any WQ in the given channel
3310 + */
3311 +void qbman_pull_desc_set_fq(struct qbman_pull_desc *, uint32_t fqid);
3312 +void qbman_pull_desc_set_wq(struct qbman_pull_desc *, uint32_t wqid,
3313 + enum qbman_pull_type_e dct);
3314 +void qbman_pull_desc_set_channel(struct qbman_pull_desc *, uint32_t chid,
3315 + enum qbman_pull_type_e dct);
3316 +
3317 +/**
3318 + * qbman_swp_pull() - Issue the pull dequeue command
3319 + * @s: the software portal object.
3320 + * @d: the pull dequeue descriptor which has been configured with
3321 + * the set of qbman_pull_desc_set_*() calls.
3322 + *
3323 + * Return 0 for success, and -EBUSY if the software portal is not ready
3324 + * to do pull dequeue.
3325 + */
3326 +int qbman_swp_pull(struct qbman_swp *, struct qbman_pull_desc *d);
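+
+/*
+ * Example (editorial sketch; "storage"/"storage_phys" are assumed to be a
+ * DMA-mapped array of struct dpaa2_dq and "fqid" a valid frame queue id):
+ * a typical pull dequeue into user-provided storage.
+ *
+ *	struct qbman_pull_desc pd;
+ *
+ *	qbman_pull_desc_clear(&pd);
+ *	qbman_pull_desc_set_storage(&pd, storage, storage_phys, 1);
+ *	qbman_pull_desc_set_numframes(&pd, 16);
+ *	qbman_pull_desc_set_fq(&pd, fqid);
+ *	if (qbman_swp_pull(swp, &pd) == -EBUSY)
+ *		... the portal is not ready, retry later ...
+ */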
3327 +
3328 +/* -------------------------------- */
3329 +/* Polling DQRR for dequeue results */
3330 +/* -------------------------------- */
3331 +
3332 +/**
3333 + * qbman_swp_dqrr_next() - Get a valid DQRR entry.
3334 + * @s: the software portal object.
3335 + *
3336 + * Return NULL if there are no unconsumed DQRR entries. Return a DQRR entry
3337 + * only once, so repeated calls can return a sequence of DQRR entries, without
3338 + * requiring they be consumed immediately or in any particular order.
3339 + */
3340 +const struct dpaa2_dq *qbman_swp_dqrr_next(struct qbman_swp *s);
3341 +
3342 +/**
3343 + * qbman_swp_dqrr_consume() - Consume DQRR entries previously returned from
3344 + * qbman_swp_dqrr_next().
3345 + * @s: the software portal object.
3346 + * @dq: the DQRR entry to be consumed.
3347 + */
3348 +void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct dpaa2_dq *dq);
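+
+/*
+ * Example (editorial sketch): draining the currently visible DQRR entries.
+ *
+ *	const struct dpaa2_dq *dq;
+ *
+ *	while ((dq = qbman_swp_dqrr_next(swp)) != NULL) {
+ *		if (qbman_result_is_DQ(dq))
+ *			... handle the dequeued frame ...
+ *		else
+ *			... handle the state-change notification ...
+ *		qbman_swp_dqrr_consume(swp, dq);
+ *	}
+ */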
3349 +
3350 +/* ------------------------------------------------- */
3351 +/* Polling user-provided storage for dequeue results */
3352 +/* ------------------------------------------------- */
3353 +/**
3354 + * qbman_result_has_new_result() - Check for and get the dequeue response from
3355 + * the dq storage memory set in the pull dequeue command
3356 + * @s: the software portal object.
3357 + * @dq: the dequeue result read from the memory.
3358 + *
3359 + * Only used for user-provided storage of dequeue results, not DQRR. For
3360 + * efficiency purposes, the driver will perform any required endianness
3361 + * conversion to ensure that the user's dequeue result storage is in host-endian
3362 + * format (whether or not that is the same as the little-endian format that
3363 + * hardware DMA'd to the user's storage). As such, once the user has called
3364 + * qbman_result_has_new_result() and been returned a valid dequeue result,
3365 + * they should not call it again on the same memory location (except of course
3366 + * if another dequeue command has been executed to produce a new result to that
3367 + * location).
3368 + *
3369 + * Return 1 for getting a valid dequeue result, or 0 for not getting a valid
3370 + * dequeue result.
3371 + */
3372 +int qbman_result_has_new_result(struct qbman_swp *,
3373 + const struct dpaa2_dq *);
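+
+/*
+ * Example (editorial sketch; "storage" is the array passed to
+ * qbman_pull_desc_set_storage()): consuming a pull result from
+ * user-provided storage.
+ *
+ *	while (!qbman_result_has_new_result(swp, &storage[0]))
+ *		cpu_relax();
+ *	... parse storage[0], e.g. with qbman_result_is_DQ(); subsequent
+ *	    entries can be polled the same way as they are produced ...
+ */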
3374 +
3375 +/* -------------------------------------------------------- */
3376 +/* Parsing dequeue entries (DQRR and user-provided storage) */
3377 +/* -------------------------------------------------------- */
3378 +
3379 +/**
3380 + * qbman_result_is_DQ() - Check whether the dequeue result is a dequeue response
3381 + * @dq: the dequeue result to be checked.
3382 + *
3383 + * DQRR entries may contain non-dequeue results, ie. notifications
3384 + */
3385 +int qbman_result_is_DQ(const struct dpaa2_dq *);
3386 +
3387 +/**
3388 + * qbman_result_is_SCN() - Check whether the dequeue result is a notification
3389 + * @dq: the dequeue result to be checked.
3390 + *
3391 + * All the non-dequeue results (FQDAN/CDAN/CSCN/...) are "state change
3392 + * notifications" of one type or another. Some APIs apply to all of them, of the
3393 + * form qbman_result_SCN_***().
3394 + */
3395 +static inline int qbman_result_is_SCN(const struct dpaa2_dq *dq)
3396 +{
3397 + return !qbman_result_is_DQ(dq);
3398 +}
3399 +
3400 +/**
3401 + * Recognise different notification types, only required if the user allows for
3402 + * these to occur, and cares about them when they do.
3403 + */
3404 +int qbman_result_is_FQDAN(const struct dpaa2_dq *);
3405 + /* FQ Data Availability */
3406 +int qbman_result_is_CDAN(const struct dpaa2_dq *);
3407 + /* Channel Data Availability */
3408 +int qbman_result_is_CSCN(const struct dpaa2_dq *);
3409 + /* Congestion State Change */
3410 +int qbman_result_is_BPSCN(const struct dpaa2_dq *);
3411 + /* Buffer Pool State Change */
3412 +int qbman_result_is_CGCU(const struct dpaa2_dq *);
3413 + /* Congestion Group Count Update */
3414 +/* Frame queue state change notifications; (FQDAN in theory counts too as it
3415 + * leaves a FQ parked, but it is primarily a data availability notification) */
3416 +int qbman_result_is_FQRN(const struct dpaa2_dq *); /* Retirement */
3417 +int qbman_result_is_FQRNI(const struct dpaa2_dq *);
3418 + /* Retirement Immediate */
3419 +int qbman_result_is_FQPN(const struct dpaa2_dq *); /* Park */
3420 +
3421 +/* NB: for parsing dequeue results (when "is_DQ" is TRUE), use the higher-layer
3422 + * dpaa2_dq_*() functions. */
3423 +
3424 +/* State-change notifications (FQDAN/CDAN/CSCN/...). */
3425 +/**
3426 + * qbman_result_SCN_state() - Get the state field in State-change notification
3427 + */
3428 +uint8_t qbman_result_SCN_state(const struct dpaa2_dq *);
3429 +/**
3430 + * qbman_result_SCN_rid() - Get the resource id in State-change notification
3431 + */
3432 +uint32_t qbman_result_SCN_rid(const struct dpaa2_dq *);
3433 +/**
3434 + * qbman_result_SCN_ctx() - Get the context data in State-change notification
3435 + */
3436 +uint64_t qbman_result_SCN_ctx(const struct dpaa2_dq *);
3437 +/**
3438 + * qbman_result_SCN_state_in_mem() - Get the state field in State-change
3439 + * notification which is written to memory instead of DQRR.
3440 + */
3441 +uint8_t qbman_result_SCN_state_in_mem(const struct dpaa2_dq *);
3442 +/**
3443 + * qbman_result_SCN_rid_in_mem() - Get the resource id in State-change
3444 + * notification which is written to memory instead of DQRR.
3445 + */
3446 +uint32_t qbman_result_SCN_rid_in_mem(const struct dpaa2_dq *);
3447 +
3448 +/* Type-specific "resource IDs". Mainly for illustration purposes, though it
3449 + * also gives the appropriate type widths. */
3450 +#define qbman_result_FQDAN_fqid(dq) qbman_result_SCN_rid(dq)
3451 +#define qbman_result_FQRN_fqid(dq) qbman_result_SCN_rid(dq)
3452 +#define qbman_result_FQRNI_fqid(dq) qbman_result_SCN_rid(dq)
3453 +#define qbman_result_FQPN_fqid(dq) qbman_result_SCN_rid(dq)
3454 +#define qbman_result_CDAN_cid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
3455 +#define qbman_result_CSCN_cgid(dq) ((uint16_t)qbman_result_SCN_rid(dq))
3456 +
3457 +/**
3458 + * qbman_result_bpscn_bpid() - Get the bpid from BPSCN
3459 + *
3460 + * Return the buffer pool id.
3461 + */
3462 +uint16_t qbman_result_bpscn_bpid(const struct dpaa2_dq *);
3463 +/**
3464 + * qbman_result_bpscn_has_free_bufs() - Check whether there are free
3465 + * buffers in the pool from BPSCN.
3466 + *
3467 + * Return non-zero if there are free buffers in the pool, otherwise 0.
3468 + */
3469 +int qbman_result_bpscn_has_free_bufs(const struct dpaa2_dq *);
3470 +/**
3471 + * qbman_result_bpscn_is_depleted() - Check BPSCN to see whether the
3472 + * buffer pool is depleted.
3473 + *
3474 + * Return the status of buffer pool depletion.
3475 + */
3476 +int qbman_result_bpscn_is_depleted(const struct dpaa2_dq *);
3477 +/**
3478 + * qbman_result_bpscn_is_surplus() - Check BPSCN to see whether the buffer
3479 + * pool is surplus or not.
3480 + *
3481 + * Return the status of buffer pool surplus.
3482 + */
3483 +int qbman_result_bpscn_is_surplus(const struct dpaa2_dq *);
3484 +/**
3485 + * qbman_result_bpscn_ctx() - Get the BPSCN CTX from BPSCN message
3486 + *
3487 + * Return the BPSCN context.
3488 + */
3489 +uint64_t qbman_result_bpscn_ctx(const struct dpaa2_dq *);
3490 +
3491 +/* Parsing CGCU */
3492 +/**
3493 + * qbman_result_cgcu_cgid() - Check CGCU resource id, i.e. cgid
3494 + *
3495 + * Return the CGCU resource id.
3496 + */
3497 +uint16_t qbman_result_cgcu_cgid(const struct dpaa2_dq *);
3498 +/**
3499 + * qbman_result_cgcu_icnt() - Get the I_CNT from CGCU
3500 + *
3501 + * Return instantaneous count in the CGCU notification.
3502 + */
3503 +uint64_t qbman_result_cgcu_icnt(const struct dpaa2_dq *);
3504 +
3505 + /************/
3506 + /* Enqueues */
3507 + /************/
3508 +/**
3509 + * struct qbman_eq_desc - structure of enqueue descriptor
3510 + */
3511 +struct qbman_eq_desc {
3512 + uint32_t dont_manipulate_directly[8];
3513 +};
3514 +
3515 +/**
3516 + * struct qbman_eq_response - structure of enqueue response
3517 + */
3518 +struct qbman_eq_response {
3519 + uint32_t dont_manipulate_directly[16];
3520 +};
3521 +
3522 +/**
3523 + * qbman_eq_desc_clear() - Clear the contents of a descriptor to
3524 + * default/starting state.
3525 + */
3526 +void qbman_eq_desc_clear(struct qbman_eq_desc *);
3527 +
3528 +/* Exactly one of the following descriptor "actions" should be set. (Calling
3529 + * any one of these will replace the effect of any prior call to one of these.)
3530 + * - enqueue without order-restoration
3531 + * - enqueue with order-restoration
3532 + * - fill a hole in the order-restoration sequence, without any enqueue
3533 + * - advance NESN (Next Expected Sequence Number), without any enqueue
3534 + * 'respond_success' indicates whether an enqueue response should be DMA'd
3535 + * after success (otherwise a response is DMA'd only after failure).
3536 + * 'incomplete' indicates that other fragments of the same 'seqnum' are yet to
3537 + * be enqueued.
3538 + */
3539 +/**
3540 + * qbman_eq_desc_set_no_orp() - Set enqueue descriptor without orp
3541 + * @d: the enqueue descriptor.
3542 + * @respond_success: 1 = enqueue with response always; 0 = enqueue with
3543 + * rejections returned on a FQ.
3544 + */
3545 +void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success);
3546 +
3547 +/**
3548 + * qbman_eq_desc_set_orp() - Set order-restoration in the enqueue descriptor
3549 + * @d: the enqueue descriptor.
3550 + * @respond_success: 1 = enqueue with response always; 0 = enqueue with
3551 + * rejections returned on a FQ.
3552 + * @opr_id: the order point record id.
3553 + * @seqnum: the order restoration sequence number.
3554 + * @incomplete: indicates that other fragments with the same sequence number
3555 + * are yet to be enqueued.
3556 + */
3557 +void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
3558 + uint32_t opr_id, uint32_t seqnum, int incomplete);
3559 +
3560 +/**
3561 + * qbman_eq_desc_set_orp_hole() - fill a hole in the order-restoration sequence
3562 + * without any enqueue
3563 + * @d: the enqueue descriptor.
3564 + * @opr_id: the order point record id.
3565 + * @seqnum: the order restoration sequence number.
3566 + */
3567 +void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
3568 + uint32_t seqnum);
3569 +
3570 +/**
3571 + * qbman_eq_desc_set_orp_nesn() - advance NESN (Next Expected Sequence Number)
3572 + * without any enqueue
3573 + * @d: the enqueue descriptor.
3574 + * @opr_id: the order point record id.
3575 + * @seqnum: the order restoration sequence number.
3576 + */
3577 +void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
3578 + uint32_t seqnum);
3579 +
3580 +/**
3581 + * qbman_eq_desc_set_response() - Set the enqueue response info.
3582 + * @d: the enqueue descriptor
3583 + * @storage_phys: the physical address of the enqueue response in memory.
3584 + * @stash: indicates whether write allocation is enabled.
3585 + *
3586 + * In the case where an enqueue response is DMA'd, this determines where that
3587 + * response should go. (The physical/DMA address is given for hardware's
3588 + * benefit, but software should interpret it as a "struct qbman_eq_response"
3589 + * data structure.) 'stash' controls whether or not the write to main-memory
3590 + * expresses a cache-warming attribute.
3591 + */
3592 +void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
3593 + dma_addr_t storage_phys,
3594 + int stash);
3595 +/**
3596 + * qbman_eq_desc_set_token() - Set token for the enqueue command
3597 + * @d: the enqueue descriptor
3598 + * @token: the token to be set.
3599 + *
3600 + * token is the value that shows up in an enqueue response that can be used to
3601 + * detect when the results have been published. The easiest technique is to zero
3602 + * result "storage" before issuing an enqueue, and use any non-zero 'token'
3603 + * value.
3604 + */
3605 +void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token);
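+
+/*
+ * Example (editorial sketch of the zero-then-poll technique described above;
+ * "resp"/"resp_phys" are an assumed DMA-mapped response buffer and "ed" an
+ * enqueue descriptor being prepared):
+ *
+ *	memset(resp, 0, sizeof(struct qbman_eq_response));
+ *	qbman_eq_desc_set_response(&ed, resp_phys, 0);
+ *	qbman_eq_desc_set_token(&ed, 1);
+ *	... issue qbman_swp_enqueue(), then poll 'resp' until the token
+ *	    field reads back non-zero ...
+ */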
3606 +
3607 +/**
3608 + * qbman_eq_desc_set_fq()
3609 + * qbman_eq_desc_set_qd() - Set either FQ or Queuing Destination for the enqueue
3610 + * command.
3611 + * @d: the enqueue descriptor
3612 + * @fqid: the id of the frame queue to be enqueued.
3613 + * @qdid: the id of the queuing destination to be enqueued.
3614 + * @qd_bin: the queuing destination bin
3615 + * @qd_prio: the queuing destination priority.
3616 + *
3617 + * Exactly one of the following descriptor "targets" should be set. (Calling any
3618 + * one of these will replace the effect of any prior call to one of these.)
3619 + * - enqueue to a frame queue
3620 + * - enqueue to a queuing destination
3621 + * Note that none of these will have any effect if the "action" type has been
3622 + * set to "orp_hole" or "orp_nesn".
3623 + */
3624 +void qbman_eq_desc_set_fq(struct qbman_eq_desc *, uint32_t fqid);
3625 +void qbman_eq_desc_set_qd(struct qbman_eq_desc *, uint32_t qdid,
3626 + uint32_t qd_bin, uint32_t qd_prio);
3627 +
3628 +/**
3629 + * qbman_eq_desc_set_eqdi() - enable/disable EQDI interrupt
3630 + * @d: the enqueue descriptor
3631 + * @enable: boolean to enable/disable EQDI
3632 + *
3633 + * Determines whether or not the portal's EQDI interrupt source should be
3634 + * asserted after the enqueue command is completed.
3635 + */
3636 +void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *, int enable);
3637 +
3638 +/**
3639 + * qbman_eq_desc_set_dca() - Set DCA mode in the enqueue command.
3640 + * @d: the enqueue descriptor.
3641 + * @enable: enable/disable DCA mode.
3642 + * @dqrr_idx: DCAP_CI, the DCAP consumer index.
3643 + * @park: determines whether or not the FQ should be parked.
3644 + *
3645 + * Determines whether or not a portal DQRR entry should be consumed once the
3646 + * enqueue command is completed. (And if so, and the DQRR entry corresponds
3647 + * to a held-active (order-preserving) FQ, whether the FQ should be parked
3648 + * instead of being rescheduled.)
3649 + */
3650 +void qbman_eq_desc_set_dca(struct qbman_eq_desc *, int enable,
3651 + uint32_t dqrr_idx, int park);
3652 +
3653 +/**
3654 + * qbman_swp_enqueue() - Issue an enqueue command.
3655 + * @s: the software portal used for enqueue.
3656 + * @d: the enqueue descriptor.
3657 + * @fd: the frame descriptor to be enqueued.
3658 + *
3659 + * Please note that 'fd' should only be NULL if the "action" of the
3660 + * descriptor is "orp_hole" or "orp_nesn".
3661 + *
3662 + * Return 0 for successful enqueue, -EBUSY if the EQCR is not ready.
3663 + */
3664 +int qbman_swp_enqueue(struct qbman_swp *, const struct qbman_eq_desc *,
3665 + const struct qbman_fd *fd);
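+
+/*
+ * Example (editorial sketch; "fd" is a filled-in frame descriptor and
+ * "fqid" a valid frame queue id): a minimal enqueue with no order
+ * restoration, responding only on rejection.
+ *
+ *	struct qbman_eq_desc ed;
+ *	int ret;
+ *
+ *	qbman_eq_desc_clear(&ed);
+ *	qbman_eq_desc_set_no_orp(&ed, 0);
+ *	qbman_eq_desc_set_fq(&ed, fqid);
+ *	do {
+ *		ret = qbman_swp_enqueue(swp, &ed, &fd);
+ *	} while (ret == -EBUSY);
+ */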
3666 +
3667 +/**
3668 + * qbman_swp_enqueue_thresh() - Set the threshold for EQRI interrupt.
3669 + *
3670 + * An EQRI interrupt can be generated when the fill-level of EQCR falls below
3671 + * the 'thresh' value set here. Setting thresh==0 (the default) disables.
3672 + */
3673 +int qbman_swp_enqueue_thresh(struct qbman_swp *, unsigned int thresh);
3674 +
3675 + /*******************/
3676 + /* Buffer releases */
3677 + /*******************/
3678 +/**
3679 + * struct qbman_release_desc - The structure for buffer release descriptor
3680 + */
3681 +struct qbman_release_desc {
3682 + uint32_t dont_manipulate_directly[1];
3683 +};
3684 +
3685 +/**
3686 + * qbman_release_desc_clear() - Clear the contents of a descriptor to
3687 + * default/starting state.
3688 + */
3689 +void qbman_release_desc_clear(struct qbman_release_desc *);
3690 +
3691 +/**
3692 + * qbman_release_desc_set_bpid() - Set the ID of the buffer pool to release to
3693 + */
3694 +void qbman_release_desc_set_bpid(struct qbman_release_desc *, uint32_t bpid);
3695 +
3696 +/**
3697 + * qbman_release_desc_set_rcdi() - Determines whether or not the portal's RCDI
3698 + * interrupt source should be asserted after the release command is completed.
3699 + */
3700 +void qbman_release_desc_set_rcdi(struct qbman_release_desc *, int enable);
3701 +
3702 +/**
3703 + * qbman_swp_release() - Issue a buffer release command.
3704 + * @s: the software portal object.
3705 + * @d: the release descriptor.
3706 + * @buffers: a pointer to the buffer addresses to be released.
3707 + * @num_buffers: number of buffers to be released, must be less than 8.
3708 + *
3709 + * Return 0 for success, -EBUSY if the release command ring is not ready.
3710 + */
3711 +int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
3712 + const uint64_t *buffers, unsigned int num_buffers);
3713 +
3714 + /*******************/
3715 + /* Buffer acquires */
3716 + /*******************/
3717 +
3718 +/**
3719 + * qbman_swp_acquire() - Issue a buffer acquire command.
3720 + * @s: the software portal object.
3721 + * @bpid: the buffer pool index.
3722 + * @buffers: a pointer to where the acquired buffer address(es) will be written.
3723 + * @num_buffers: number of buffers to be acquired, must be less than 8.
3724 + *
3725 + * Return 0 for success, or negative error code if the acquire command
3726 + * fails.
3727 + */
3728 +int qbman_swp_acquire(struct qbman_swp *, uint32_t bpid, uint64_t *buffers,
3729 + unsigned int num_buffers);
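+
+/*
+ * Example (editorial sketch; "bpid" is an assumed valid buffer pool id and
+ * the buffer addresses are placeholders): releasing two buffers to a pool
+ * and later acquiring from the same pool.
+ *
+ *	uint64_t bufs[2] = { buf0_phys, buf1_phys };
+ *	struct qbman_release_desc rd;
+ *
+ *	qbman_release_desc_clear(&rd);
+ *	qbman_release_desc_set_bpid(&rd, bpid);
+ *	if (qbman_swp_release(swp, &rd, bufs, 2) == -EBUSY)
+ *		... release command ring not ready, retry later ...
+ *	...
+ *	if (qbman_swp_acquire(swp, bpid, bufs, 2) < 0)
+ *		... acquire failed, e.g. the pool is empty ...
+ */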
3730 +
3731 + /*****************/
3732 + /* FQ management */
3733 + /*****************/
3734 +
3735 +/**
3736 + * qbman_swp_fq_schedule() - Move the fq to the scheduled state.
3737 + * @s: the software portal object.
3738 + * @fqid: the index of frame queue to be scheduled.
3739 + *
3740 + * There are a couple of different ways that a FQ can end up in the parked
3741 + * state; this schedules it.
3742 + *
3743 + * Return 0 for success, or negative error code for failure.
3744 + */
3745 +int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid);
3746 +
3747 +/**
3748 + * qbman_swp_fq_force() - Force the FQ to fully scheduled state.
3749 + * @s: the software portal object.
3750 + * @fqid: the index of frame queue to be forced.
3751 + *
3752 + * Force eligible will force a tentatively-scheduled FQ to be fully-scheduled
3753 + * and thus be available for selection by any channel-dequeuing behaviour (push
3754 + * or pull). If the FQ is subsequently "dequeued" from the channel and is still
3755 + * empty at the time this happens, the resulting dq_entry will have no FD.
3756 + * (qbman_result_DQ_fd() will return NULL.)
3757 + *
3758 + * Return 0 for success, or negative error code for failure.
3759 + */
3760 +int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid);
3761 +
3762 +/**
3763 + * qbman_swp_fq_xon()
3764 + * qbman_swp_fq_xoff() - XON/XOFF the frame queue.
3765 + * @s: the software portal object.
3766 + * @fqid: the index of frame queue.
3767 + *
3768 + * These functions change the FQ flow-control state between XON/XOFF. (The
3769 + * default is XON.) This setting doesn't affect enqueues to the FQ, just
3770 + * dequeues. XOFF FQs will remain in the tentatively-scheduled state, even when
3771 + * non-empty, meaning they won't be selected for scheduled dequeuing. If a FQ is
3772 + * changed to XOFF after it had already become truly-scheduled to a channel, and
3773 + * a pull dequeue of that channel occurs that selects that FQ for dequeuing,
3774 + * then the resulting dq_entry will have no FD. (qbman_result_DQ_fd() will
3775 + * return NULL.)
3776 + *
3777 + * Return 0 for success, or negative error code for failure.
3778 + */
3779 +int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid);
3780 +int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid);
3781 +
3782 + /**********************/
3783 + /* Channel management */
3784 + /**********************/
3785 +
3786 +/* If the user has been allocated a channel object that is going to generate
3787 + * CDANs to another channel, then these functions will be necessary.
3788 + * CDAN-enabled channels only generate a single CDAN notification, after which
3789 + * they need to be reenabled before they'll generate another. (The idea is
3790 + * that pull dequeuing will occur in reaction to the CDAN, followed by a
3791 + * reenable step.) Each function generates a distinct command to hardware, so a
3792 + * combination function is provided if the user wishes to modify the "context"
3793 + * (which shows up in each CDAN message) each time they reenable, as a single
3794 + * command to hardware. */
3795 +/**
3796 + * qbman_swp_CDAN_set_context() - Set CDAN context
3797 + * @s: the software portal object.
3798 + * @channelid: the channel index.
3799 + * @ctx: the context to be set in CDAN.
3800 + *
3801 + * Return 0 for success, or negative error code for failure.
3802 + */
3803 +int qbman_swp_CDAN_set_context(struct qbman_swp *, uint16_t channelid,
3804 + uint64_t ctx);
3805 +
3806 +/**
3807 + * qbman_swp_CDAN_enable() - Enable CDAN for the channel.
3808 + * @s: the software portal object.
3809 + * @channelid: the index of the channel to generate CDAN.
3810 + *
3811 + * Return 0 for success, or negative error code for failure.
3812 + */
3813 +int qbman_swp_CDAN_enable(struct qbman_swp *, uint16_t channelid);
3814 +
3815 +/**
3816 + * qbman_swp_CDAN_disable() - disable CDAN for the channel.
3817 + * @s: the software portal object.
3818 + * @channelid: the index of the channel to generate CDAN.
3819 + *
3820 + * Return 0 for success, or negative error code for failure.
3821 + */
3822 +int qbman_swp_CDAN_disable(struct qbman_swp *, uint16_t channelid);
3823 +
3824 +/**
3825 + * qbman_swp_CDAN_set_context_enable() - Set CDAN context and enable CDAN
3826 + * @s: the software portal object.
3827 + * @channelid: the index of the channel to generate CDAN.
3828 + * @ctx: the context set in CDAN.
3829 + *
3830 + * Return 0 for success, or negative error code for failure.
3831 + */
3832 +int qbman_swp_CDAN_set_context_enable(struct qbman_swp *, uint16_t channelid,
3833 + uint64_t ctx);
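+
+/*
+ * Example (editorial sketch; "chid" and "ctx" are assumed values): the
+ * re-enable-after-notification pattern described above.
+ *
+ *	setup:
+ *		qbman_swp_CDAN_set_context_enable(swp, chid, ctx);
+ *	on each CDAN:
+ *		... pull-dequeue the channel ...
+ *		qbman_swp_CDAN_enable(swp, chid);
+ */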
3834 +
3835 +#endif /* !_FSL_QBMAN_PORTAL_H */
3836 --- /dev/null
3837 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_debug.c
3838 @@ -0,0 +1,846 @@
3839 +/* Copyright (C) 2015 Freescale Semiconductor, Inc.
3840 + *
3841 + * Redistribution and use in source and binary forms, with or without
3842 + * modification, are permitted provided that the following conditions are met:
3843 + * * Redistributions of source code must retain the above copyright
3844 + * notice, this list of conditions and the following disclaimer.
3845 + * * Redistributions in binary form must reproduce the above copyright
3846 + * notice, this list of conditions and the following disclaimer in the
3847 + * documentation and/or other materials provided with the distribution.
3848 + * * Neither the name of Freescale Semiconductor nor the
3849 + * names of its contributors may be used to endorse or promote products
3850 + * derived from this software without specific prior written permission.
3851 + *
3852 + *
3853 + * ALTERNATIVELY, this software may be distributed under the terms of the
3854 + * GNU General Public License ("GPL") as published by the Free Software
3855 + * Foundation, either version 2 of that License or (at your option) any
3856 + * later version.
3857 + *
3858 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
3859 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
3860 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
3861 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
3862 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
3863 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
3864 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
3865 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
3866 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
3867 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
3868 + */
3869 +
3870 +#include "qbman_portal.h"
3871 +#include "qbman_debug.h"
3872 +#include "fsl_qbman_portal.h"
3873 +
3874 +/* QBMan portal management command code */
3875 +#define QBMAN_BP_QUERY 0x32
3876 +#define QBMAN_FQ_QUERY 0x44
3877 +#define QBMAN_FQ_QUERY_NP 0x45
3878 +#define QBMAN_CGR_QUERY 0x51
3879 +#define QBMAN_WRED_QUERY 0x54
3880 +#define QBMAN_CGR_STAT_QUERY 0x55
3881 +#define QBMAN_CGR_STAT_QUERY_CLR 0x56
3882 +
3883 +enum qbman_attr_usage_e {
3884 + qbman_attr_usage_fq,
3885 + qbman_attr_usage_bpool,
3886 + qbman_attr_usage_cgr,
3887 +};
3888 +
3889 +struct int_qbman_attr {
3890 + uint32_t words[32];
3891 + enum qbman_attr_usage_e usage;
3892 +};
3893 +
3894 +#define attr_type_set(a, e) \
3895 +{ \
3896 + struct qbman_attr *__attr = a; \
3897 + enum qbman_attr_usage_e __usage = e; \
3898 + ((struct int_qbman_attr *)__attr)->usage = __usage; \
3899 +}
3900 +
3901 +#define ATTR32(d) (&(d)->dont_manipulate_directly[0])
3902 +#define ATTR32_1(d) (&(d)->dont_manipulate_directly[16])
3903 +
3904 +static struct qb_attr_code code_bp_bpid = QB_CODE(0, 16, 16);
3905 +static struct qb_attr_code code_bp_bdi = QB_CODE(1, 16, 1);
3906 +static struct qb_attr_code code_bp_va = QB_CODE(1, 17, 1);
3907 +static struct qb_attr_code code_bp_wae = QB_CODE(1, 18, 1);
3908 +static struct qb_attr_code code_bp_swdet = QB_CODE(4, 0, 16);
3909 +static struct qb_attr_code code_bp_swdxt = QB_CODE(4, 16, 16);
3910 +static struct qb_attr_code code_bp_hwdet = QB_CODE(5, 0, 16);
3911 +static struct qb_attr_code code_bp_hwdxt = QB_CODE(5, 16, 16);
3912 +static struct qb_attr_code code_bp_swset = QB_CODE(6, 0, 16);
3913 +static struct qb_attr_code code_bp_swsxt = QB_CODE(6, 16, 16);
3914 +static struct qb_attr_code code_bp_vbpid = QB_CODE(7, 0, 14);
3915 +static struct qb_attr_code code_bp_icid = QB_CODE(7, 16, 15);
3916 +static struct qb_attr_code code_bp_pl = QB_CODE(7, 31, 1);
3917 +static struct qb_attr_code code_bp_bpscn_addr_lo = QB_CODE(8, 0, 32);
3918 +static struct qb_attr_code code_bp_bpscn_addr_hi = QB_CODE(9, 0, 32);
3919 +static struct qb_attr_code code_bp_bpscn_ctx_lo = QB_CODE(10, 0, 32);
3920 +static struct qb_attr_code code_bp_bpscn_ctx_hi = QB_CODE(11, 0, 32);
3921 +static struct qb_attr_code code_bp_hw_targ = QB_CODE(12, 0, 16);
3922 +static struct qb_attr_code code_bp_state = QB_CODE(1, 24, 3);
3923 +static struct qb_attr_code code_bp_fill = QB_CODE(2, 0, 32);
3924 +static struct qb_attr_code code_bp_hdptr = QB_CODE(3, 0, 32);
3925 +static struct qb_attr_code code_bp_sdcnt = QB_CODE(13, 0, 8);
3926 +static struct qb_attr_code code_bp_hdcnt = QB_CODE(13, 1, 8);
3927 +static struct qb_attr_code code_bp_sscnt = QB_CODE(13, 2, 8);
3928 +
3929 +void qbman_bp_attr_clear(struct qbman_attr *a)
3930 +{
3931 + memset(a, 0, sizeof(*a));
3932 + attr_type_set(a, qbman_attr_usage_bpool);
3933 +}
3934 +
3935 +int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
3936 + struct qbman_attr *a)
3937 +{
3938 + uint32_t *p;
3939 + uint32_t verb, rslt;
3940 + uint32_t *attr = ATTR32(a);
3941 +
3942 + qbman_bp_attr_clear(a);
3943 +
3944 + /* Start the management command */
3945 + p = qbman_swp_mc_start(s);
3946 + if (!p)
3947 + return -EBUSY;
3948 +
3949 + /* Encode the caller-provided attributes */
3950 + qb_attr_code_encode(&code_bp_bpid, p, bpid);
3951 +
3952 + /* Complete the management command */
3953 + p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_BP_QUERY);
3954 +
3955 + /* Decode the outcome */
3956 + verb = qb_attr_code_decode(&code_generic_verb, p);
3957 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
3958 + BUG_ON(verb != QBMAN_BP_QUERY);
3959 +
3960 + /* Determine success or failure */
3961 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
3962 + pr_err("Query of BPID 0x%x failed, code=0x%02x\n", bpid, rslt);
3963 + return -EIO;
3964 + }
3965 +
3966 + /* For the query, word[0] of the result contains only the
3967 + * verb/rslt fields, so skip word[0].
3968 + */
3969 + word_copy(&attr[1], &p[1], 15);
3970 + return 0;
3971 +}
3972 +
3973 +void qbman_bp_attr_get_bdi(struct qbman_attr *a, int *bdi, int *va, int *wae)
3974 +{
3975 + uint32_t *p = ATTR32(a);
3976 +
3977 + *bdi = !!qb_attr_code_decode(&code_bp_bdi, p);
3978 + *va = !!qb_attr_code_decode(&code_bp_va, p);
3979 + *wae = !!qb_attr_code_decode(&code_bp_wae, p);
3980 +}
3981 +
3982 +static uint32_t qbman_bp_thresh_to_value(uint32_t val)
3983 +{
3984 + return (val & 0xff) << ((val & 0xf00) >> 8);
3985 +}
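+
+/* Editorial note: the encoding is an 8-bit mantissa in bits 7:0 scaled by a
+ * 4-bit exponent in bits 11:8, i.e. value = mantissa << exponent. For
+ * example, an encoded threshold of 0x3ff decodes to 0xff << 3 = 2040.
+ */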
3986 +
3987 +void qbman_bp_attr_get_swdet(struct qbman_attr *a, uint32_t *swdet)
3988 +{
3989 + uint32_t *p = ATTR32(a);
3990 +
3991 + *swdet = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swdet,
3992 + p));
3993 +}
3994 +void qbman_bp_attr_get_swdxt(struct qbman_attr *a, uint32_t *swdxt)
3995 +{
3996 + uint32_t *p = ATTR32(a);
3997 +
3998 + *swdxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swdxt,
3999 + p));
4000 +}
4001 +void qbman_bp_attr_get_hwdet(struct qbman_attr *a, uint32_t *hwdet)
4002 +{
4003 + uint32_t *p = ATTR32(a);
4004 +
4005 + *hwdet = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_hwdet,
4006 + p));
4007 +}
4008 +void qbman_bp_attr_get_hwdxt(struct qbman_attr *a, uint32_t *hwdxt)
4009 +{
4010 + uint32_t *p = ATTR32(a);
4011 +
4012 + *hwdxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_hwdxt,
4013 + p));
4014 +}
4015 +
4016 +void qbman_bp_attr_get_swset(struct qbman_attr *a, uint32_t *swset)
4017 +{
4018 + uint32_t *p = ATTR32(a);
4019 +
4020 + *swset = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swset,
4021 + p));
4022 +}
4023 +
4024 +void qbman_bp_attr_get_swsxt(struct qbman_attr *a, uint32_t *swsxt)
4025 +{
4026 + uint32_t *p = ATTR32(a);
4027 +
4028 + *swsxt = qbman_bp_thresh_to_value(qb_attr_code_decode(&code_bp_swsxt,
4029 + p));
4030 +}
4031 +
4032 +void qbman_bp_attr_get_vbpid(struct qbman_attr *a, uint32_t *vbpid)
4033 +{
4034 + uint32_t *p = ATTR32(a);
4035 +
4036 + *vbpid = qb_attr_code_decode(&code_bp_vbpid, p);
4037 +}
4038 +
4039 +void qbman_bp_attr_get_icid(struct qbman_attr *a, uint32_t *icid, int *pl)
4040 +{
4041 + uint32_t *p = ATTR32(a);
4042 +
4043 + *icid = qb_attr_code_decode(&code_bp_icid, p);
4044 + *pl = !!qb_attr_code_decode(&code_bp_pl, p);
4045 +}
4046 +
4047 +void qbman_bp_attr_get_bpscn_addr(struct qbman_attr *a, uint64_t *bpscn_addr)
4048 +{
4049 + uint32_t *p = ATTR32(a);
4050 +
4051 + *bpscn_addr = ((uint64_t)qb_attr_code_decode(&code_bp_bpscn_addr_hi,
4052 + p) << 32) |
4053 + (uint64_t)qb_attr_code_decode(&code_bp_bpscn_addr_lo,
4054 + p);
4055 +}
4056 +
4057 +void qbman_bp_attr_get_bpscn_ctx(struct qbman_attr *a, uint64_t *bpscn_ctx)
4058 +{
4059 + uint32_t *p = ATTR32(a);
4060 +
4061 + *bpscn_ctx = ((uint64_t)qb_attr_code_decode(&code_bp_bpscn_ctx_hi, p)
4062 + << 32) |
4063 + (uint64_t)qb_attr_code_decode(&code_bp_bpscn_ctx_lo,
4064 + p);
4065 +}
4066 +
4067 +void qbman_bp_attr_get_hw_targ(struct qbman_attr *a, uint32_t *hw_targ)
4068 +{
4069 + uint32_t *p = ATTR32(a);
4070 +
4071 + *hw_targ = qb_attr_code_decode(&code_bp_hw_targ, p);
4072 +}
4073 +
4074 +int qbman_bp_info_has_free_bufs(struct qbman_attr *a)
4075 +{
4076 + uint32_t *p = ATTR32(a);
4077 +
4078 + return !(int)(qb_attr_code_decode(&code_bp_state, p) & 0x1);
4079 +}
4080 +
4081 +int qbman_bp_info_is_depleted(struct qbman_attr *a)
4082 +{
4083 + uint32_t *p = ATTR32(a);
4084 +
4085 + return (int)(qb_attr_code_decode(&code_bp_state, p) & 0x2);
4086 +}
4087 +
4088 +int qbman_bp_info_is_surplus(struct qbman_attr *a)
4089 +{
4090 + uint32_t *p = ATTR32(a);
4091 +
4092 + return (int)(qb_attr_code_decode(&code_bp_state, p) & 0x4);
4093 +}
4094 +
4095 +uint32_t qbman_bp_info_num_free_bufs(struct qbman_attr *a)
4096 +{
4097 + uint32_t *p = ATTR32(a);
4098 +
4099 + return qb_attr_code_decode(&code_bp_fill, p);
4100 +}
4101 +
4102 +uint32_t qbman_bp_info_hdptr(struct qbman_attr *a)
4103 +{
4104 + uint32_t *p = ATTR32(a);
4105 +
4106 + return qb_attr_code_decode(&code_bp_hdptr, p);
4107 +}
4108 +
4109 +uint32_t qbman_bp_info_sdcnt(struct qbman_attr *a)
4110 +{
4111 + uint32_t *p = ATTR32(a);
4112 +
4113 + return qb_attr_code_decode(&code_bp_sdcnt, p);
4114 +}
4115 +
4116 +uint32_t qbman_bp_info_hdcnt(struct qbman_attr *a)
4117 +{
4118 + uint32_t *p = ATTR32(a);
4119 +
4120 + return qb_attr_code_decode(&code_bp_hdcnt, p);
4121 +}
4122 +
4123 +uint32_t qbman_bp_info_sscnt(struct qbman_attr *a)
4124 +{
4125 + uint32_t *p = ATTR32(a);
4126 +
4127 + return qb_attr_code_decode(&code_bp_sscnt, p);
4128 +}
4129 +
4130 +static struct qb_attr_code code_fq_fqid = QB_CODE(1, 0, 24);
4131 +static struct qb_attr_code code_fq_cgrid = QB_CODE(2, 16, 16);
4132 +static struct qb_attr_code code_fq_destwq = QB_CODE(3, 0, 15);
4133 +static struct qb_attr_code code_fq_fqctrl = QB_CODE(3, 24, 8);
4134 +static struct qb_attr_code code_fq_icscred = QB_CODE(4, 0, 15);
4135 +static struct qb_attr_code code_fq_tdthresh = QB_CODE(4, 16, 13);
4136 +static struct qb_attr_code code_fq_oa_len = QB_CODE(5, 0, 12);
4137 +static struct qb_attr_code code_fq_oa_ics = QB_CODE(5, 14, 1);
4138 +static struct qb_attr_code code_fq_oa_cgr = QB_CODE(5, 15, 1);
4139 +static struct qb_attr_code code_fq_mctl_bdi = QB_CODE(5, 24, 1);
4140 +static struct qb_attr_code code_fq_mctl_ff = QB_CODE(5, 25, 1);
4141 +static struct qb_attr_code code_fq_mctl_va = QB_CODE(5, 26, 1);
4142 +static struct qb_attr_code code_fq_mctl_ps = QB_CODE(5, 27, 1);
4143 +static struct qb_attr_code code_fq_ctx_lower32 = QB_CODE(6, 0, 32);
4144 +static struct qb_attr_code code_fq_ctx_upper32 = QB_CODE(7, 0, 32);
4145 +static struct qb_attr_code code_fq_icid = QB_CODE(8, 0, 15);
4146 +static struct qb_attr_code code_fq_pl = QB_CODE(8, 15, 1);
4147 +static struct qb_attr_code code_fq_vfqid = QB_CODE(9, 0, 24);
4148 +static struct qb_attr_code code_fq_erfqid = QB_CODE(10, 0, 24);
4149 +
4150 +void qbman_fq_attr_clear(struct qbman_attr *a)
4151 +{
4152 + memset(a, 0, sizeof(*a));
4153 + attr_type_set(a, qbman_attr_usage_fq);
4154 +}
4155 +
4156 +/* FQ query function for programmable fields */
4157 +int qbman_fq_query(struct qbman_swp *s, uint32_t fqid, struct qbman_attr *desc)
4158 +{
4159 + uint32_t *p;
4160 + uint32_t verb, rslt;
4161 + uint32_t *d = ATTR32(desc);
4162 +
4163 + qbman_fq_attr_clear(desc);
4164 +
4165 + p = qbman_swp_mc_start(s);
4166 + if (!p)
4167 + return -EBUSY;
4168 + qb_attr_code_encode(&code_fq_fqid, p, fqid);
4169 + p = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY);
4170 +
4171 + /* Decode the outcome */
4172 + verb = qb_attr_code_decode(&code_generic_verb, p);
4173 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
4174 + BUG_ON(verb != QBMAN_FQ_QUERY);
4175 +
4176 + /* Determine success or failure */
4177 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
4178 + pr_err("Query of FQID 0x%x failed, code=0x%02x\n",
4179 + fqid, rslt);
4180 + return -EIO;
4181 + }
4182 + /* For the configure, word[0] of the command contains only the WE-mask.
4183 + * For the query, word[0] of the result contains only the verb/rslt
4184 + * fields. Skip word[0] in the latter case. */
4185 + word_copy(&d[1], &p[1], 15);
4186 + return 0;
4187 +}
4188 +
4189 +void qbman_fq_attr_get_fqctrl(struct qbman_attr *d, uint32_t *fqctrl)
4190 +{
4191 + uint32_t *p = ATTR32(d);
4192 +
4193 + *fqctrl = qb_attr_code_decode(&code_fq_fqctrl, p);
4194 +}
4195 +
4196 +void qbman_fq_attr_get_cgrid(struct qbman_attr *d, uint32_t *cgrid)
4197 +{
4198 + uint32_t *p = ATTR32(d);
4199 +
4200 + *cgrid = qb_attr_code_decode(&code_fq_cgrid, p);
4201 +}
4202 +
4203 +void qbman_fq_attr_get_destwq(struct qbman_attr *d, uint32_t *destwq)
4204 +{
4205 + uint32_t *p = ATTR32(d);
4206 +
4207 + *destwq = qb_attr_code_decode(&code_fq_destwq, p);
4208 +}
4209 +
4210 +void qbman_fq_attr_get_icscred(struct qbman_attr *d, uint32_t *icscred)
4211 +{
4212 + uint32_t *p = ATTR32(d);
4213 +
4214 + *icscred = qb_attr_code_decode(&code_fq_icscred, p);
4215 +}
4216 +
4217 +static struct qb_attr_code code_tdthresh_exp = QB_CODE(0, 0, 5);
4218 +static struct qb_attr_code code_tdthresh_mant = QB_CODE(0, 5, 8);
4219 +static uint32_t qbman_thresh_to_value(uint32_t val)
4220 +{
4221 + uint32_t m, e;
4222 +
4223 + m = qb_attr_code_decode(&code_tdthresh_mant, &val);
4224 + e = qb_attr_code_decode(&code_tdthresh_exp, &val);
4225 + return m << e;
4226 +}
4227 +
4228 +void qbman_fq_attr_get_tdthresh(struct qbman_attr *d, uint32_t *tdthresh)
4229 +{
4230 + uint32_t *p = ATTR32(d);
4231 +
4232 + *tdthresh = qbman_thresh_to_value(qb_attr_code_decode(&code_fq_tdthresh,
4233 + p));
4234 +}
4235 +
4236 +void qbman_fq_attr_get_oa(struct qbman_attr *d,
4237 + int *oa_ics, int *oa_cgr, int32_t *oa_len)
4238 +{
4239 + uint32_t *p = ATTR32(d);
4240 +
4241 + *oa_ics = !!qb_attr_code_decode(&code_fq_oa_ics, p);
4242 + *oa_cgr = !!qb_attr_code_decode(&code_fq_oa_cgr, p);
4243 + *oa_len = qb_attr_code_makesigned(&code_fq_oa_len,
4244 + qb_attr_code_decode(&code_fq_oa_len, p));
4245 +}
4246 +
4247 +void qbman_fq_attr_get_mctl(struct qbman_attr *d,
4248 + int *bdi, int *ff, int *va, int *ps)
4249 +{
4250 + uint32_t *p = ATTR32(d);
4251 +
4252 + *bdi = !!qb_attr_code_decode(&code_fq_mctl_bdi, p);
4253 + *ff = !!qb_attr_code_decode(&code_fq_mctl_ff, p);
4254 + *va = !!qb_attr_code_decode(&code_fq_mctl_va, p);
4255 + *ps = !!qb_attr_code_decode(&code_fq_mctl_ps, p);
4256 +}
4257 +
4258 +void qbman_fq_attr_get_ctx(struct qbman_attr *d, uint32_t *hi, uint32_t *lo)
4259 +{
4260 + uint32_t *p = ATTR32(d);
4261 +
4262 + *hi = qb_attr_code_decode(&code_fq_ctx_upper32, p);
4263 + *lo = qb_attr_code_decode(&code_fq_ctx_lower32, p);
4264 +}
4265 +
4266 +void qbman_fq_attr_get_icid(struct qbman_attr *d, uint32_t *icid, int *pl)
4267 +{
4268 + uint32_t *p = ATTR32(d);
4269 +
4270 + *icid = qb_attr_code_decode(&code_fq_icid, p);
4271 + *pl = !!qb_attr_code_decode(&code_fq_pl, p);
4272 +}
4273 +
4274 +void qbman_fq_attr_get_vfqid(struct qbman_attr *d, uint32_t *vfqid)
4275 +{
4276 + uint32_t *p = ATTR32(d);
4277 +
4278 + *vfqid = qb_attr_code_decode(&code_fq_vfqid, p);
4279 +}
4280 +
4281 +void qbman_fq_attr_get_erfqid(struct qbman_attr *d, uint32_t *erfqid)
4282 +{
4283 + uint32_t *p = ATTR32(d);
4284 +
4285 + *erfqid = qb_attr_code_decode(&code_fq_erfqid, p);
4286 +}
4287 +
4288 +/* Query FQ Non-Programmable Fields */
4289 +static struct qb_attr_code code_fq_np_state = QB_CODE(0, 16, 3);
4290 +static struct qb_attr_code code_fq_np_fe = QB_CODE(0, 19, 1);
4291 +static struct qb_attr_code code_fq_np_x = QB_CODE(0, 20, 1);
4292 +static struct qb_attr_code code_fq_np_r = QB_CODE(0, 21, 1);
4293 +static struct qb_attr_code code_fq_np_oe = QB_CODE(0, 22, 1);
4294 +static struct qb_attr_code code_fq_np_frm_cnt = QB_CODE(6, 0, 24);
4295 +static struct qb_attr_code code_fq_np_byte_cnt = QB_CODE(7, 0, 32);
4296 +
4297 +int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
4298 + struct qbman_attr *state)
4299 +{
4300 + uint32_t *p;
4301 + uint32_t verb, rslt;
4302 + uint32_t *d = ATTR32(state);
4303 +
4304 + qbman_fq_attr_clear(state);
4305 +
4306 + p = qbman_swp_mc_start(s);
4307 + if (!p)
4308 + return -EBUSY;
4309 + qb_attr_code_encode(&code_fq_fqid, p, fqid);
4310 + p = qbman_swp_mc_complete(s, p, QBMAN_FQ_QUERY_NP);
4311 +
4312 + /* Decode the outcome */
4313 + verb = qb_attr_code_decode(&code_generic_verb, p);
4314 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
4315 + BUG_ON(verb != QBMAN_FQ_QUERY_NP);
4316 +
4317 + /* Determine success or failure */
4318 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
4319 + pr_err("Query NP fields of FQID 0x%x failed, code=0x%02x\n",
4320 + fqid, rslt);
4321 + return -EIO;
4322 + }
4323 + word_copy(&d[0], &p[0], 16);
4324 + return 0;
4325 +}
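+
+/*
+ * Example (editorial sketch; "fqid" is an assumed valid frame queue id):
+ * querying the volatile FQ state and printing the frame/byte counts.
+ *
+ *	struct qbman_attr state;
+ *
+ *	if (!qbman_fq_query_state(swp, fqid, &state))
+ *		pr_info("FQ 0x%x: %u frames, %u bytes\n", fqid,
+ *			qbman_fq_state_frame_count(&state),
+ *			qbman_fq_state_byte_count(&state));
+ */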
4326 +
4327 +uint32_t qbman_fq_state_schedstate(const struct qbman_attr *state)
4328 +{
4329 + const uint32_t *p = ATTR32(state);
4330 +
4331 + return qb_attr_code_decode(&code_fq_np_state, p);
4332 +}
4333 +
4334 +int qbman_fq_state_force_eligible(const struct qbman_attr *state)
4335 +{
4336 + const uint32_t *p = ATTR32(state);
4337 +
4338 + return !!qb_attr_code_decode(&code_fq_np_fe, p);
4339 +}
4340 +
4341 +int qbman_fq_state_xoff(const struct qbman_attr *state)
4342 +{
4343 + const uint32_t *p = ATTR32(state);
4344 +
4345 + return !!qb_attr_code_decode(&code_fq_np_x, p);
4346 +}
4347 +
4348 +int qbman_fq_state_retirement_pending(const struct qbman_attr *state)
4349 +{
4350 + const uint32_t *p = ATTR32(state);
4351 +
4352 + return !!qb_attr_code_decode(&code_fq_np_r, p);
4353 +}
4354 +
4355 +int qbman_fq_state_overflow_error(const struct qbman_attr *state)
4356 +{
4357 + const uint32_t *p = ATTR32(state);
4358 +
4359 + return !!qb_attr_code_decode(&code_fq_np_oe, p);
4360 +}
4361 +
4362 +uint32_t qbman_fq_state_frame_count(const struct qbman_attr *state)
4363 +{
4364 + const uint32_t *p = ATTR32(state);
4365 +
4366 + return qb_attr_code_decode(&code_fq_np_frm_cnt, p);
4367 +}
4368 +
4369 +uint32_t qbman_fq_state_byte_count(const struct qbman_attr *state)
4370 +{
4371 + const uint32_t *p = ATTR32(state);
4372 +
4373 + return qb_attr_code_decode(&code_fq_np_byte_cnt, p);
4374 +}
4375 +
4376 +/* Query CGR */
4377 +static struct qb_attr_code code_cgr_cgid = QB_CODE(0, 16, 16);
4378 +static struct qb_attr_code code_cgr_cscn_wq_en_enter = QB_CODE(2, 0, 1);
4379 +static struct qb_attr_code code_cgr_cscn_wq_en_exit = QB_CODE(2, 1, 1);
4380 +static struct qb_attr_code code_cgr_cscn_wq_icd = QB_CODE(2, 2, 1);
4381 +static struct qb_attr_code code_cgr_mode = QB_CODE(3, 16, 2);
4382 +static struct qb_attr_code code_cgr_rej_cnt_mode = QB_CODE(3, 18, 1);
4383 +static struct qb_attr_code code_cgr_cscn_bdi = QB_CODE(3, 19, 1);
4384 +static struct qb_attr_code code_cgr_cscn_wr_en_enter = QB_CODE(3, 24, 1);
4385 +static struct qb_attr_code code_cgr_cscn_wr_en_exit = QB_CODE(3, 25, 1);
4386 +static struct qb_attr_code code_cgr_cg_wr_ae = QB_CODE(3, 26, 1);
4387 +static struct qb_attr_code code_cgr_cscn_dcp_en = QB_CODE(3, 27, 1);
4388 +static struct qb_attr_code code_cgr_cg_wr_va = QB_CODE(3, 28, 1);
4389 +static struct qb_attr_code code_cgr_i_cnt_wr_en = QB_CODE(4, 0, 1);
4390 +static struct qb_attr_code code_cgr_i_cnt_wr_bnd = QB_CODE(4, 1, 5);
4391 +static struct qb_attr_code code_cgr_td_en = QB_CODE(4, 8, 1);
4392 +static struct qb_attr_code code_cgr_cs_thres = QB_CODE(4, 16, 13);
4393 +static struct qb_attr_code code_cgr_cs_thres_x = QB_CODE(5, 0, 13);
4394 +static struct qb_attr_code code_cgr_td_thres = QB_CODE(5, 16, 13);
4395 +static struct qb_attr_code code_cgr_cscn_tdcp = QB_CODE(6, 0, 16);
4396 +static struct qb_attr_code code_cgr_cscn_wqid = QB_CODE(6, 16, 16);
4397 +static struct qb_attr_code code_cgr_cscn_vcgid = QB_CODE(7, 0, 16);
4398 +static struct qb_attr_code code_cgr_cg_icid = QB_CODE(7, 16, 15);
4399 +static struct qb_attr_code code_cgr_cg_pl = QB_CODE(7, 31, 1);
4400 +static struct qb_attr_code code_cgr_cg_wr_addr_lo = QB_CODE(8, 0, 32);
4401 +static struct qb_attr_code code_cgr_cg_wr_addr_hi = QB_CODE(9, 0, 32);
4402 +static struct qb_attr_code code_cgr_cscn_ctx_lo = QB_CODE(10, 0, 32);
4403 +static struct qb_attr_code code_cgr_cscn_ctx_hi = QB_CODE(11, 0, 32);
4404 +
4405 +void qbman_cgr_attr_clear(struct qbman_attr *a)
4406 +{
4407 + memset(a, 0, sizeof(*a));
4408 + attr_type_set(a, qbman_attr_usage_cgr);
4409 +}
4410 +
4411 +int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid, struct qbman_attr *attr)
4412 +{
4413 + uint32_t *p;
4414 + uint32_t verb, rslt;
4415 + uint32_t *d[2];
4416 + int i;
4417 + uint32_t query_verb;
4418 +
4419 + d[0] = ATTR32(attr);
4420 + d[1] = ATTR32_1(attr);
4421 +
4422 + qbman_cgr_attr_clear(attr);
4423 +
4424 + for (i = 0; i < 2; i++) {
4425 + p = qbman_swp_mc_start(s);
4426 + if (!p)
4427 + return -EBUSY;
4428 + query_verb = i ? QBMAN_WRED_QUERY : QBMAN_CGR_QUERY;
4429 +
4430 + qb_attr_code_encode(&code_cgr_cgid, p, cgid);
4431 + p = qbman_swp_mc_complete(s, p, p[0] | query_verb);
4432 +
4433 + /* Decode the outcome */
4434 + verb = qb_attr_code_decode(&code_generic_verb, p);
4435 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
4436 + BUG_ON(verb != query_verb);
4437 +
4438 + /* Determine success or failure */
4439 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
4440 + pr_err("Query CGID 0x%x failed,", cgid);
4441 + pr_err(" verb=0x%02x, code=0x%02x\n", verb, rslt);
4442 + return -EIO;
4443 + }
4444 + /* For the configure, word[0] of the command contains only the
4445 + * verb/cgid. For the query, word[0] of the result contains
4446 + * only the verb/rslt fields. Skip word[0] in the latter case.
4447 + */
4448 + word_copy(&d[i][1], &p[1], 15);
4449 + }
4450 + return 0;
4451 +}
4452 +
4453 +void qbman_cgr_attr_get_ctl1(struct qbman_attr *d, int *cscn_wq_en_enter,
4454 + int *cscn_wq_en_exit, int *cscn_wq_icd)
4455 +{
4456 + uint32_t *p = ATTR32(d);
4457 + *cscn_wq_en_enter = !!qb_attr_code_decode(&code_cgr_cscn_wq_en_enter,
4458 + p);
4459 + *cscn_wq_en_exit = !!qb_attr_code_decode(&code_cgr_cscn_wq_en_exit, p);
4460 + *cscn_wq_icd = !!qb_attr_code_decode(&code_cgr_cscn_wq_icd, p);
4461 +}
4462 +
4463 +void qbman_cgr_attr_get_mode(struct qbman_attr *d, uint32_t *mode,
4464 + int *rej_cnt_mode, int *cscn_bdi)
4465 +{
4466 + uint32_t *p = ATTR32(d);
4467 + *mode = qb_attr_code_decode(&code_cgr_mode, p);
4468 + *rej_cnt_mode = !!qb_attr_code_decode(&code_cgr_rej_cnt_mode, p);
4469 + *cscn_bdi = !!qb_attr_code_decode(&code_cgr_cscn_bdi, p);
4470 +}
4471 +
4472 +void qbman_cgr_attr_get_ctl2(struct qbman_attr *d, int *cscn_wr_en_enter,
4473 + int *cscn_wr_en_exit, int *cg_wr_ae,
4474 + int *cscn_dcp_en, int *cg_wr_va)
4475 +{
4476 + uint32_t *p = ATTR32(d);
4477 + *cscn_wr_en_enter = !!qb_attr_code_decode(&code_cgr_cscn_wr_en_enter,
4478 + p);
4479 + *cscn_wr_en_exit = !!qb_attr_code_decode(&code_cgr_cscn_wr_en_exit, p);
4480 + *cg_wr_ae = !!qb_attr_code_decode(&code_cgr_cg_wr_ae, p);
4481 + *cscn_dcp_en = !!qb_attr_code_decode(&code_cgr_cscn_dcp_en, p);
4482 + *cg_wr_va = !!qb_attr_code_decode(&code_cgr_cg_wr_va, p);
4483 +}
4484 +
4485 +void qbman_cgr_attr_get_iwc(struct qbman_attr *d, int *i_cnt_wr_en,
4486 + uint32_t *i_cnt_wr_bnd)
4487 +{
4488 + uint32_t *p = ATTR32(d);
4489 + *i_cnt_wr_en = !!qb_attr_code_decode(&code_cgr_i_cnt_wr_en, p);
4490 + *i_cnt_wr_bnd = qb_attr_code_decode(&code_cgr_i_cnt_wr_bnd, p);
4491 +}
4492 +
4493 +void qbman_cgr_attr_get_tdc(struct qbman_attr *d, int *td_en)
4494 +{
4495 + uint32_t *p = ATTR32(d);
4496 + *td_en = !!qb_attr_code_decode(&code_cgr_td_en, p);
4497 +}
4498 +
4499 +void qbman_cgr_attr_get_cs_thres(struct qbman_attr *d, uint32_t *cs_thres)
4500 +{
4501 + uint32_t *p = ATTR32(d);
4502 + *cs_thres = qbman_thresh_to_value(qb_attr_code_decode(
4503 + &code_cgr_cs_thres, p));
4504 +}
4505 +
4506 +void qbman_cgr_attr_get_cs_thres_x(struct qbman_attr *d,
4507 + uint32_t *cs_thres_x)
4508 +{
4509 + uint32_t *p = ATTR32(d);
4510 + *cs_thres_x = qbman_thresh_to_value(qb_attr_code_decode(
4511 + &code_cgr_cs_thres_x, p));
4512 +}
4513 +
4514 +void qbman_cgr_attr_get_td_thres(struct qbman_attr *d, uint32_t *td_thres)
4515 +{
4516 + uint32_t *p = ATTR32(d);
4517 + *td_thres = qbman_thresh_to_value(qb_attr_code_decode(
4518 + &code_cgr_td_thres, p));
4519 +}
4520 +
4521 +void qbman_cgr_attr_get_cscn_tdcp(struct qbman_attr *d, uint32_t *cscn_tdcp)
4522 +{
4523 + uint32_t *p = ATTR32(d);
4524 + *cscn_tdcp = qb_attr_code_decode(&code_cgr_cscn_tdcp, p);
4525 +}
4526 +
4527 +void qbman_cgr_attr_get_cscn_wqid(struct qbman_attr *d, uint32_t *cscn_wqid)
4528 +{
4529 + uint32_t *p = ATTR32(d);
4530 + *cscn_wqid = qb_attr_code_decode(&code_cgr_cscn_wqid, p);
4531 +}
4532 +
4533 +void qbman_cgr_attr_get_cscn_vcgid(struct qbman_attr *d,
4534 + uint32_t *cscn_vcgid)
4535 +{
4536 + uint32_t *p = ATTR32(d);
4537 + *cscn_vcgid = qb_attr_code_decode(&code_cgr_cscn_vcgid, p);
4538 +}
4539 +
4540 +void qbman_cgr_attr_get_cg_icid(struct qbman_attr *d, uint32_t *icid,
4541 + int *pl)
4542 +{
4543 + uint32_t *p = ATTR32(d);
4544 + *icid = qb_attr_code_decode(&code_cgr_cg_icid, p);
4545 + *pl = !!qb_attr_code_decode(&code_cgr_cg_pl, p);
4546 +}
4547 +
4548 +void qbman_cgr_attr_get_cg_wr_addr(struct qbman_attr *d,
4549 + uint64_t *cg_wr_addr)
4550 +{
4551 + uint32_t *p = ATTR32(d);
4552 + *cg_wr_addr = ((uint64_t)qb_attr_code_decode(&code_cgr_cg_wr_addr_hi,
4553 + p) << 32) |
4554 + (uint64_t)qb_attr_code_decode(&code_cgr_cg_wr_addr_lo,
4555 + p);
4556 +}
4557 +
4558 +void qbman_cgr_attr_get_cscn_ctx(struct qbman_attr *d, uint64_t *cscn_ctx)
4559 +{
4560 + uint32_t *p = ATTR32(d);
4561 + *cscn_ctx = ((uint64_t)qb_attr_code_decode(&code_cgr_cscn_ctx_hi, p)
4562 + << 32) |
4563 + (uint64_t)qb_attr_code_decode(&code_cgr_cscn_ctx_lo, p);
4564 +}
4565 +
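/* A minimal usage sketch of the CGR query flow: populate a qbman_attr with
 * qbman_cgr_query() and decode a few fields with the accessors above. The
 * portal and CGID are assumed to be owned by the caller; example_cgr_dump()
 * is a hypothetical helper name.
 */
static int example_cgr_dump(struct qbman_swp *s, uint32_t cgid)
{
	struct qbman_attr attr;
	uint32_t mode, cs_thres;
	int rej_cnt_mode, cscn_bdi, ret;

	ret = qbman_cgr_query(s, cgid, &attr);
	if (ret)
		return ret;
	qbman_cgr_attr_get_mode(&attr, &mode, &rej_cnt_mode, &cscn_bdi);
	qbman_cgr_attr_get_cs_thres(&attr, &cs_thres);
	pr_info("CGR 0x%x: mode=%u cs_thres=%u\n", cgid, mode, cs_thres);
	return 0;
}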
4566 +#define WRED_EDP_WORD(n) (18 + (n) / 4)
4567 +#define WRED_EDP_OFFSET(n) (8 * ((n) % 4))
4568 +#define WRED_PARM_DP_WORD(n) ((n) + 20)
4569 +#define WRED_WE_EDP(n) (16 + (n) * 2)
4570 +#define WRED_WE_PARM_DP(n) (17 + (n) * 2)
4571 +void qbman_cgr_attr_wred_get_edp(struct qbman_attr *d, uint32_t idx,
4572 + int *edp)
4573 +{
4574 + uint32_t *p = ATTR32(d);
4575 + struct qb_attr_code code_wred_edp = QB_CODE(WRED_EDP_WORD(idx),
4576 + WRED_EDP_OFFSET(idx), 8);
4577 + *edp = (int)qb_attr_code_decode(&code_wred_edp, p);
4578 +}
4579 +
4580 +void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
4581 + uint64_t *maxth, uint8_t *maxp)
4582 +{
4583 + uint8_t ma, mn, step_i, step_s, pn;
4584 +
4585 + ma = (uint8_t)(dp >> 24);
4586 + mn = (uint8_t)(dp >> 19) & 0x1f;
4587 + step_i = (uint8_t)(dp >> 11);
4588 + step_s = (uint8_t)(dp >> 6) & 0x1f;
4589 + pn = (uint8_t)dp & 0x3f;
4590 +
4591 + *maxp = ((pn<<2) * 100)/256;
4592 +
4593 + if (mn == 0)
4594 + *maxth = ma;
4595 + else
4596 + *maxth = ((ma+256) * (1<<(mn-1)));
4597 +
4598 + if (step_s == 0)
4599 + *minth = *maxth - step_i;
4600 + else
4601 + *minth = *maxth - (256 + step_i) * (1<<(step_s - 1));
4602 +}
4603 +
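/* Worked example of the decomposition above: dp = 0x0a1828a0 encodes
 * ma = 10, mn = 3, step_i = 5, step_s = 2, pn = 32, which gives
 *   maxp  = ((32 << 2) * 100) / 256           = 50 (percent)
 *   maxth = (10 + 256) * (1 << (3 - 1))       = 1064
 *   minth = 1064 - (256 + 5) * (1 << (2 - 1)) = 542
 */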
4604 +void qbman_cgr_attr_wred_get_parm_dp(struct qbman_attr *d, uint32_t idx,
4605 + uint32_t *dp)
4606 +{
4607 + uint32_t *p = ATTR32(d);
4608 + struct qb_attr_code code_wred_parm_dp = QB_CODE(WRED_PARM_DP_WORD(idx),
4609 + 0, 8);
4610 + *dp = qb_attr_code_decode(&code_wred_parm_dp, p);
4611 +}
4612 +
4613 +/* Query CGR/CCGR/CQ statistics */
4614 +static struct qb_attr_code code_cgr_stat_ct = QB_CODE(4, 0, 32);
4615 +static struct qb_attr_code code_cgr_stat_frame_cnt_lo = QB_CODE(4, 0, 32);
4616 +static struct qb_attr_code code_cgr_stat_frame_cnt_hi = QB_CODE(5, 0, 8);
4617 +static struct qb_attr_code code_cgr_stat_byte_cnt_lo = QB_CODE(6, 0, 32);
4618 +static struct qb_attr_code code_cgr_stat_byte_cnt_hi = QB_CODE(7, 0, 16);
4619 +static int qbman_cgr_statistics_query(struct qbman_swp *s, uint32_t cgid,
4620 + int clear, uint32_t command_type,
4621 + uint64_t *frame_cnt, uint64_t *byte_cnt)
4622 +{
4623 + uint32_t *p;
4624 + uint32_t verb, rslt;
4625 + uint32_t query_verb;
4626 + uint32_t hi, lo;
4627 +
4628 + p = qbman_swp_mc_start(s);
4629 + if (!p)
4630 + return -EBUSY;
4631 +
4632 + qb_attr_code_encode(&code_cgr_cgid, p, cgid);
4633 + if (command_type < 2)
4634 + qb_attr_code_encode(&code_cgr_stat_ct, p, command_type);
4635 + query_verb = clear ?
4636 + QBMAN_CGR_STAT_QUERY_CLR : QBMAN_CGR_STAT_QUERY;
4637 + p = qbman_swp_mc_complete(s, p, p[0] | query_verb);
4638 +
4639 + /* Decode the outcome */
4640 + verb = qb_attr_code_decode(&code_generic_verb, p);
4641 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
4642 + BUG_ON(verb != query_verb);
4643 +
4644 + /* Determine success or failure */
4645 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
4646 +		pr_err("Query statistics of CGID 0x%x failed, verb=0x%02x code=0x%02x\n",
4647 +		       cgid, verb, rslt);
4648 + return -EIO;
4649 + }
4650 +
4651 +	if (frame_cnt) {
4652 + hi = qb_attr_code_decode(&code_cgr_stat_frame_cnt_hi, p);
4653 + lo = qb_attr_code_decode(&code_cgr_stat_frame_cnt_lo, p);
4654 + *frame_cnt = ((uint64_t)hi << 32) | (uint64_t)lo;
4655 + }
4656 +	if (byte_cnt) {
4657 + hi = qb_attr_code_decode(&code_cgr_stat_byte_cnt_hi, p);
4658 + lo = qb_attr_code_decode(&code_cgr_stat_byte_cnt_lo, p);
4659 + *byte_cnt = ((uint64_t)hi << 32) | (uint64_t)lo;
4660 + }
4661 +
4662 + return 0;
4663 +}
4664 +
4665 +int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4666 + uint64_t *frame_cnt, uint64_t *byte_cnt)
4667 +{
4668 + return qbman_cgr_statistics_query(s, cgid, clear, 0xff,
4669 + frame_cnt, byte_cnt);
4670 +}
4671 +
4672 +int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4673 + uint64_t *frame_cnt, uint64_t *byte_cnt)
4674 +{
4675 + return qbman_cgr_statistics_query(s, cgid, clear, 1,
4676 + frame_cnt, byte_cnt);
4677 +}
4678 +
4679 +int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4680 + uint64_t *frame_cnt, uint64_t *byte_cnt)
4681 +{
4682 + return qbman_cgr_statistics_query(s, cgid, clear, 0,
4683 + frame_cnt, byte_cnt);
4684 +}
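/* A minimal sketch of the statistics wrappers above: read the reject
 * counters for a CGR without clearing them. The portal and CGID are assumed
 * to come from the caller.
 */
static void example_cgr_stats(struct qbman_swp *s, uint32_t cgid)
{
	uint64_t frames = 0, bytes = 0;

	if (!qbman_cgr_reject_statistics(s, cgid, 0, &frames, &bytes))
		pr_info("CGR 0x%x rejected %llu frames / %llu bytes\n", cgid,
			(unsigned long long)frames, (unsigned long long)bytes);
}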
4685 --- /dev/null
4686 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_debug.h
4687 @@ -0,0 +1,136 @@
4688 +/* Copyright (C) 2015 Freescale Semiconductor, Inc.
4689 + *
4690 + * Redistribution and use in source and binary forms, with or without
4691 + * modification, are permitted provided that the following conditions are met:
4692 + * * Redistributions of source code must retain the above copyright
4693 + * notice, this list of conditions and the following disclaimer.
4694 + * * Redistributions in binary form must reproduce the above copyright
4695 + * notice, this list of conditions and the following disclaimer in the
4696 + * documentation and/or other materials provided with the distribution.
4697 + * * Neither the name of Freescale Semiconductor nor the
4698 + * names of its contributors may be used to endorse or promote products
4699 + * derived from this software without specific prior written permission.
4700 + *
4701 + *
4702 + * ALTERNATIVELY, this software may be distributed under the terms of the
4703 + * GNU General Public License ("GPL") as published by the Free Software
4704 + * Foundation, either version 2 of that License or (at your option) any
4705 + * later version.
4706 + *
4707 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
4708 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
4709 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
4710 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
4711 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
4712 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
4713 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
4714 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4715 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
4716 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4717 + */
4718 +
4719 +struct qbman_attr {
4720 + uint32_t dont_manipulate_directly[40];
4721 +};
4722 +
4723 +/* Buffer pool query commands */
4724 +int qbman_bp_query(struct qbman_swp *s, uint32_t bpid,
4725 + struct qbman_attr *a);
4726 +void qbman_bp_attr_get_bdi(struct qbman_attr *a, int *bdi, int *va, int *wae);
4727 +void qbman_bp_attr_get_swdet(struct qbman_attr *a, uint32_t *swdet);
4728 +void qbman_bp_attr_get_swdxt(struct qbman_attr *a, uint32_t *swdxt);
4729 +void qbman_bp_attr_get_hwdet(struct qbman_attr *a, uint32_t *hwdet);
4730 +void qbman_bp_attr_get_hwdxt(struct qbman_attr *a, uint32_t *hwdxt);
4731 +void qbman_bp_attr_get_swset(struct qbman_attr *a, uint32_t *swset);
4732 +void qbman_bp_attr_get_swsxt(struct qbman_attr *a, uint32_t *swsxt);
4733 +void qbman_bp_attr_get_vbpid(struct qbman_attr *a, uint32_t *vbpid);
4734 +void qbman_bp_attr_get_icid(struct qbman_attr *a, uint32_t *icid, int *pl);
4735 +void qbman_bp_attr_get_bpscn_addr(struct qbman_attr *a, uint64_t *bpscn_addr);
4736 +void qbman_bp_attr_get_bpscn_ctx(struct qbman_attr *a, uint64_t *bpscn_ctx);
4737 +void qbman_bp_attr_get_hw_targ(struct qbman_attr *a, uint32_t *hw_targ);
4738 +int qbman_bp_info_has_free_bufs(struct qbman_attr *a);
4739 +int qbman_bp_info_is_depleted(struct qbman_attr *a);
4740 +int qbman_bp_info_is_surplus(struct qbman_attr *a);
4741 +uint32_t qbman_bp_info_num_free_bufs(struct qbman_attr *a);
4742 +uint32_t qbman_bp_info_hdptr(struct qbman_attr *a);
4743 +uint32_t qbman_bp_info_sdcnt(struct qbman_attr *a);
4744 +uint32_t qbman_bp_info_hdcnt(struct qbman_attr *a);
4745 +uint32_t qbman_bp_info_sscnt(struct qbman_attr *a);
4746 +
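/* A minimal sketch of the buffer pool query flow declared above: populate a
 * qbman_attr via qbman_bp_query() and pick out fields with the accessors.
 * The portal and BPID are assumed to be the caller's.
 */
static inline int example_bp_free_bufs(struct qbman_swp *s, uint32_t bpid)
{
	struct qbman_attr attr;
	int ret = qbman_bp_query(s, bpid, &attr);

	if (ret)
		return ret;
	if (qbman_bp_info_is_depleted(&attr))
		pr_warn("BPID 0x%x is depleted\n", bpid);
	return (int)qbman_bp_info_num_free_bufs(&attr);
}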
4747 +/* FQ query function for programmable fields */
4748 +int qbman_fq_query(struct qbman_swp *s, uint32_t fqid,
4749 + struct qbman_attr *desc);
4750 +void qbman_fq_attr_get_fqctrl(struct qbman_attr *d, uint32_t *fqctrl);
4751 +void qbman_fq_attr_get_cgrid(struct qbman_attr *d, uint32_t *cgrid);
4752 +void qbman_fq_attr_get_destwq(struct qbman_attr *d, uint32_t *destwq);
4753 +void qbman_fq_attr_get_icscred(struct qbman_attr *d, uint32_t *icscred);
4754 +void qbman_fq_attr_get_tdthresh(struct qbman_attr *d, uint32_t *tdthresh);
4755 +void qbman_fq_attr_get_oa(struct qbman_attr *d,
4756 + int *oa_ics, int *oa_cgr, int32_t *oa_len);
4757 +void qbman_fq_attr_get_mctl(struct qbman_attr *d,
4758 + int *bdi, int *ff, int *va, int *ps);
4759 +void qbman_fq_attr_get_ctx(struct qbman_attr *d, uint32_t *hi, uint32_t *lo);
4760 +void qbman_fq_attr_get_icid(struct qbman_attr *d, uint32_t *icid, int *pl);
4761 +void qbman_fq_attr_get_vfqid(struct qbman_attr *d, uint32_t *vfqid);
4762 +void qbman_fq_attr_get_erfqid(struct qbman_attr *d, uint32_t *erfqid);
4763 +
4764 +/* FQ query command for non-programmable fields */
4765 +enum qbman_fq_schedstate_e {
4766 + qbman_fq_schedstate_oos = 0,
4767 + qbman_fq_schedstate_retired,
4768 + qbman_fq_schedstate_tentatively_scheduled,
4769 + qbman_fq_schedstate_truly_scheduled,
4770 + qbman_fq_schedstate_parked,
4771 + qbman_fq_schedstate_held_active,
4772 +};
4773 +
4774 +int qbman_fq_query_state(struct qbman_swp *s, uint32_t fqid,
4775 + struct qbman_attr *state);
4776 +uint32_t qbman_fq_state_schedstate(const struct qbman_attr *state);
4777 +int qbman_fq_state_force_eligible(const struct qbman_attr *state);
4778 +int qbman_fq_state_xoff(const struct qbman_attr *state);
4779 +int qbman_fq_state_retirement_pending(const struct qbman_attr *state);
4780 +int qbman_fq_state_overflow_error(const struct qbman_attr *state);
4781 +uint32_t qbman_fq_state_frame_count(const struct qbman_attr *state);
4782 +uint32_t qbman_fq_state_byte_count(const struct qbman_attr *state);
4783 +
4784 +/* CGR query */
4785 +int qbman_cgr_query(struct qbman_swp *s, uint32_t cgid,
4786 + struct qbman_attr *attr);
4787 +void qbman_cgr_attr_get_ctl1(struct qbman_attr *d, int *cscn_wq_en_enter,
4788 + int *cscn_wq_en_exit, int *cscn_wq_icd);
4789 +void qbman_cgr_attr_get_mode(struct qbman_attr *d, uint32_t *mode,
4790 + int *rej_cnt_mode, int *cscn_bdi);
4791 +void qbman_cgr_attr_get_ctl2(struct qbman_attr *d, int *cscn_wr_en_enter,
4792 + int *cscn_wr_en_exit, int *cg_wr_ae,
4793 + int *cscn_dcp_en, int *cg_wr_va);
4794 +void qbman_cgr_attr_get_iwc(struct qbman_attr *d, int *i_cnt_wr_en,
4795 + uint32_t *i_cnt_wr_bnd);
4796 +void qbman_cgr_attr_get_tdc(struct qbman_attr *d, int *td_en);
4797 +void qbman_cgr_attr_get_cs_thres(struct qbman_attr *d, uint32_t *cs_thres);
4798 +void qbman_cgr_attr_get_cs_thres_x(struct qbman_attr *d,
4799 + uint32_t *cs_thres_x);
4800 +void qbman_cgr_attr_get_td_thres(struct qbman_attr *d, uint32_t *td_thres);
4801 +void qbman_cgr_attr_get_cscn_tdcp(struct qbman_attr *d, uint32_t *cscn_tdcp);
4802 +void qbman_cgr_attr_get_cscn_wqid(struct qbman_attr *d, uint32_t *cscn_wqid);
4803 +void qbman_cgr_attr_get_cscn_vcgid(struct qbman_attr *d,
4804 + uint32_t *cscn_vcgid);
4805 +void qbman_cgr_attr_get_cg_icid(struct qbman_attr *d, uint32_t *icid,
4806 + int *pl);
4807 +void qbman_cgr_attr_get_cg_wr_addr(struct qbman_attr *d,
4808 + uint64_t *cg_wr_addr);
4809 +void qbman_cgr_attr_get_cscn_ctx(struct qbman_attr *d, uint64_t *cscn_ctx);
4810 +void qbman_cgr_attr_wred_get_edp(struct qbman_attr *d, uint32_t idx,
4811 + int *edp);
4812 +void qbman_cgr_attr_wred_dp_decompose(uint32_t dp, uint64_t *minth,
4813 + uint64_t *maxth, uint8_t *maxp);
4814 +void qbman_cgr_attr_wred_get_parm_dp(struct qbman_attr *d, uint32_t idx,
4815 + uint32_t *dp);
4816 +
4817 +/* CGR/CCGR/CQ statistics query */
4818 +int qbman_cgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4819 + uint64_t *frame_cnt, uint64_t *byte_cnt);
4820 +int qbman_ccgr_reject_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4821 + uint64_t *frame_cnt, uint64_t *byte_cnt);
4822 +int qbman_cq_dequeue_statistics(struct qbman_swp *s, uint32_t cgid, int clear,
4823 + uint64_t *frame_cnt, uint64_t *byte_cnt);
4824 --- /dev/null
4825 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_portal.c
4826 @@ -0,0 +1,1212 @@
4827 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
4828 + *
4829 + * Redistribution and use in source and binary forms, with or without
4830 + * modification, are permitted provided that the following conditions are met:
4831 + * * Redistributions of source code must retain the above copyright
4832 + * notice, this list of conditions and the following disclaimer.
4833 + * * Redistributions in binary form must reproduce the above copyright
4834 + * notice, this list of conditions and the following disclaimer in the
4835 + * documentation and/or other materials provided with the distribution.
4836 + * * Neither the name of Freescale Semiconductor nor the
4837 + * names of its contributors may be used to endorse or promote products
4838 + * derived from this software without specific prior written permission.
4839 + *
4840 + *
4841 + * ALTERNATIVELY, this software may be distributed under the terms of the
4842 + * GNU General Public License ("GPL") as published by the Free Software
4843 + * Foundation, either version 2 of that License or (at your option) any
4844 + * later version.
4845 + *
4846 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
4847 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
4848 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
4849 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
4850 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
4851 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
4852 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
4853 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
4854 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
4855 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
4856 + */
4857 +
4858 +#include "qbman_portal.h"
4859 +
4860 +/* QBMan portal management command codes */
4861 +#define QBMAN_MC_ACQUIRE 0x30
4862 +#define QBMAN_WQCHAN_CONFIGURE 0x46
4863 +
4864 +/* CINH register offsets */
4865 +#define QBMAN_CINH_SWP_EQAR 0x8c0
4866 +#define QBMAN_CINH_SWP_DQPI 0xa00
4867 +#define QBMAN_CINH_SWP_DCAP 0xac0
4868 +#define QBMAN_CINH_SWP_SDQCR 0xb00
4869 +#define QBMAN_CINH_SWP_RAR 0xcc0
4870 +#define QBMAN_CINH_SWP_ISR 0xe00
4871 +#define QBMAN_CINH_SWP_IER 0xe40
4872 +#define QBMAN_CINH_SWP_ISDR 0xe80
4873 +#define QBMAN_CINH_SWP_IIR 0xec0
4874 +
4875 +/* CENA register offsets */
4876 +#define QBMAN_CENA_SWP_EQCR(n) (0x000 + ((uint32_t)(n) << 6))
4877 +#define QBMAN_CENA_SWP_DQRR(n) (0x200 + ((uint32_t)(n) << 6))
4878 +#define QBMAN_CENA_SWP_RCR(n) (0x400 + ((uint32_t)(n) << 6))
4879 +#define QBMAN_CENA_SWP_CR 0x600
4880 +#define QBMAN_CENA_SWP_RR(vb) (0x700 + ((uint32_t)(vb) >> 1))
4881 +#define QBMAN_CENA_SWP_VDQCR 0x780
4882 +
4883 +/* Reverse mapping of QBMAN_CENA_SWP_DQRR() */
4884 +#define QBMAN_IDX_FROM_DQRR(p) (((unsigned long)(p) & 0x1ff) >> 6)
4885 +
4886 +/* QBMan FQ management command codes */
4887 +#define QBMAN_FQ_SCHEDULE 0x48
4888 +#define QBMAN_FQ_FORCE 0x49
4889 +#define QBMAN_FQ_XON 0x4d
4890 +#define QBMAN_FQ_XOFF 0x4e
4891 +
4892 +/*******************************/
4893 +/* Pre-defined attribute codes */
4894 +/*******************************/
4895 +
4896 +struct qb_attr_code code_generic_verb = QB_CODE(0, 0, 7);
4897 +struct qb_attr_code code_generic_rslt = QB_CODE(0, 8, 8);
4898 +
4899 +/*************************/
4900 +/* SDQCR attribute codes */
4901 +/*************************/
4902 +
4903 +/* we put these here because at least some of them are required by
4904 + * qbman_swp_init() */
4905 +struct qb_attr_code code_sdqcr_dct = QB_CODE(0, 24, 2);
4906 +struct qb_attr_code code_sdqcr_fc = QB_CODE(0, 29, 1);
4907 +struct qb_attr_code code_sdqcr_tok = QB_CODE(0, 16, 8);
4908 +#define CODE_SDQCR_DQSRC(n) QB_CODE(0, n, 1)
4909 +enum qbman_sdqcr_dct {
4910 + qbman_sdqcr_dct_null = 0,
4911 + qbman_sdqcr_dct_prio_ics,
4912 + qbman_sdqcr_dct_active_ics,
4913 + qbman_sdqcr_dct_active
4914 +};
4915 +enum qbman_sdqcr_fc {
4916 + qbman_sdqcr_fc_one = 0,
4917 + qbman_sdqcr_fc_up_to_3 = 1
4918 +};
4919 +struct qb_attr_code code_sdqcr_dqsrc = QB_CODE(0, 0, 16);
4920 +
4921 +/*********************************/
4922 +/* Portal constructor/destructor */
4923 +/*********************************/
4924 +
4925 +/* Software portals should always be in the power-on state when we initialise,
4926 + * due to the CCSR-based portal reset functionality that MC has.
4927 + *
4928 + * Erk! Turns out that QMan versions prior to 4.1 do not correctly reset DQRR
4929 + * valid-bits, so we need to support a workaround where we don't trust
4930 + * valid-bits when detecting new entries until any stale ring entries have been
4931 + * overwritten at least once. The idea is that we read PI for the first few
4932 + * entries, then switch to valid-bit after that. The trick is to clear the
4933 + * bug-work-around boolean once the PI wraps around the ring for the first time.
4934 + *
4935 + * Note: this still carries a slight additional cost once the decrementer hits
4936 + * zero, so ideally the workaround should only be compiled in if the compiled
4937 + * image needs to support affected chips. We use WORKAROUND_DQRR_RESET_BUG for
4938 + * this.
4939 + */
4940 +struct qbman_swp *qbman_swp_init(const struct qbman_swp_desc *d)
4941 +{
4942 + int ret;
4943 + struct qbman_swp *p = kmalloc(sizeof(*p), GFP_KERNEL);
4944 +
4945 + if (!p)
4946 + return NULL;
4947 + p->desc = d;
4948 +#ifdef QBMAN_CHECKING
4949 + p->mc.check = swp_mc_can_start;
4950 +#endif
4951 + p->mc.valid_bit = QB_VALID_BIT;
4952 + p->sdq = 0;
4953 + qb_attr_code_encode(&code_sdqcr_dct, &p->sdq, qbman_sdqcr_dct_prio_ics);
4954 + qb_attr_code_encode(&code_sdqcr_fc, &p->sdq, qbman_sdqcr_fc_up_to_3);
4955 + qb_attr_code_encode(&code_sdqcr_tok, &p->sdq, 0xbb);
4956 + atomic_set(&p->vdq.busy, 1);
4957 + p->vdq.valid_bit = QB_VALID_BIT;
4958 + p->dqrr.next_idx = 0;
4959 + p->dqrr.valid_bit = QB_VALID_BIT;
4960 + /* TODO: should also read PI/CI type registers and check that they're on
4961 + * PoR values. If we're asked to initialise portals that aren't in reset
4962 + * state, bad things will follow. */
4963 +#ifdef WORKAROUND_DQRR_RESET_BUG
4964 + p->dqrr.reset_bug = 1;
4965 +#endif
4966 + if ((p->desc->qman_version & 0xFFFF0000) < QMAN_REV_4100)
4967 + p->dqrr.dqrr_size = 4;
4968 + else
4969 + p->dqrr.dqrr_size = 8;
4970 + ret = qbman_swp_sys_init(&p->sys, d, p->dqrr.dqrr_size);
4971 + if (ret) {
4972 + kfree(p);
4973 + pr_err("qbman_swp_sys_init() failed %d\n", ret);
4974 + return NULL;
4975 + }
4976 + /* SDQCR needs to be initialized to 0 when no channels are
4977 + being dequeued from or else the QMan HW will indicate an
4978 + error. The values that were calculated above will be
4979 + applied when dequeues from a specific channel are enabled */
4980 + qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_SDQCR, 0);
4981 + return p;
4982 +}
4983 +
4984 +void qbman_swp_finish(struct qbman_swp *p)
4985 +{
4986 +#ifdef QBMAN_CHECKING
4987 + BUG_ON(p->mc.check != swp_mc_can_start);
4988 +#endif
4989 + qbman_swp_sys_finish(&p->sys);
4990 + kfree(p);
4991 +}
4992 +
4993 +const struct qbman_swp_desc *qbman_swp_get_desc(struct qbman_swp *p)
4994 +{
4995 + return p->desc;
4996 +}
4997 +
4998 +/**************/
4999 +/* Interrupts */
5000 +/**************/
5001 +
5002 +uint32_t qbman_swp_interrupt_get_vanish(struct qbman_swp *p)
5003 +{
5004 + return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISDR);
5005 +}
5006 +
5007 +void qbman_swp_interrupt_set_vanish(struct qbman_swp *p, uint32_t mask)
5008 +{
5009 + qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISDR, mask);
5010 +}
5011 +
5012 +uint32_t qbman_swp_interrupt_read_status(struct qbman_swp *p)
5013 +{
5014 + return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_ISR);
5015 +}
5016 +
5017 +void qbman_swp_interrupt_clear_status(struct qbman_swp *p, uint32_t mask)
5018 +{
5019 + qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_ISR, mask);
5020 +}
5021 +
5022 +uint32_t qbman_swp_interrupt_get_trigger(struct qbman_swp *p)
5023 +{
5024 + return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IER);
5025 +}
5026 +
5027 +void qbman_swp_interrupt_set_trigger(struct qbman_swp *p, uint32_t mask)
5028 +{
5029 + qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IER, mask);
5030 +}
5031 +
5032 +int qbman_swp_interrupt_get_inhibit(struct qbman_swp *p)
5033 +{
5034 + return qbman_cinh_read(&p->sys, QBMAN_CINH_SWP_IIR);
5035 +}
5036 +
5037 +void qbman_swp_interrupt_set_inhibit(struct qbman_swp *p, int inhibit)
5038 +{
5039 + qbman_cinh_write(&p->sys, QBMAN_CINH_SWP_IIR, inhibit ? 0xffffffff : 0);
5040 +}
5041 +
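/* A minimal sketch of how a hard-IRQ handler might use the accessors above:
 * check the status, inhibit further interrupts, and defer the real work to a
 * threaded handler. irqreturn_t/IRQ_NONE/IRQ_WAKE_THREAD come from
 * <linux/interrupt.h>; the portal being the dev_id cookie is an assumption.
 */
static irqreturn_t example_swp_irq(int irq, void *dev_id)
{
	struct qbman_swp *p = dev_id;
	uint32_t status = qbman_swp_interrupt_read_status(p);

	if (!status)
		return IRQ_NONE;
	qbman_swp_interrupt_set_inhibit(p, 1);
	return IRQ_WAKE_THREAD;
}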
5042 +/***********************/
5043 +/* Management commands */
5044 +/***********************/
5045 +
5046 +/*
5047 + * Internal code common to all types of management commands.
5048 + */
5049 +
5050 +void *qbman_swp_mc_start(struct qbman_swp *p)
5051 +{
5052 + void *ret;
5053 +#ifdef QBMAN_CHECKING
5054 + BUG_ON(p->mc.check != swp_mc_can_start);
5055 +#endif
5056 + ret = qbman_cena_write_start(&p->sys, QBMAN_CENA_SWP_CR);
5057 +#ifdef QBMAN_CHECKING
5058 + if (!ret)
5059 + p->mc.check = swp_mc_can_submit;
5060 +#endif
5061 + return ret;
5062 +}
5063 +
5064 +void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb)
5065 +{
5066 + uint32_t *v = cmd;
5067 +#ifdef QBMAN_CHECKING
5068 +	BUG_ON(p->mc.check != swp_mc_can_submit);
5069 +#endif
5070 + /* TBD: "|=" is going to hurt performance. Need to move as many fields
5071 + * out of word zero, and for those that remain, the "OR" needs to occur
5072 + * at the caller side. This debug check helps to catch cases where the
5073 + * caller wants to OR but has forgotten to do so. */
5074 + BUG_ON((*v & cmd_verb) != *v);
5075 + *v = cmd_verb | p->mc.valid_bit;
5076 + qbman_cena_write_complete(&p->sys, QBMAN_CENA_SWP_CR, cmd);
5077 +#ifdef QBMAN_CHECKING
5078 + p->mc.check = swp_mc_can_poll;
5079 +#endif
5080 +}
5081 +
5082 +void *qbman_swp_mc_result(struct qbman_swp *p)
5083 +{
5084 + uint32_t *ret, verb;
5085 +#ifdef QBMAN_CHECKING
5086 + BUG_ON(p->mc.check != swp_mc_can_poll);
5087 +#endif
5088 + qbman_cena_invalidate_prefetch(&p->sys,
5089 + QBMAN_CENA_SWP_RR(p->mc.valid_bit));
5090 + ret = qbman_cena_read(&p->sys, QBMAN_CENA_SWP_RR(p->mc.valid_bit));
5091 + /* Remove the valid-bit - command completed iff the rest is non-zero */
5092 + verb = ret[0] & ~QB_VALID_BIT;
5093 + if (!verb)
5094 + return NULL;
5095 +#ifdef QBMAN_CHECKING
5096 + p->mc.check = swp_mc_can_start;
5097 +#endif
5098 + p->mc.valid_bit ^= QB_VALID_BIT;
5099 + return ret;
5100 +}
5101 +
5102 +/***********/
5103 +/* Enqueue */
5104 +/***********/
5105 +
5106 +/* These should be const, eventually */
5107 +static struct qb_attr_code code_eq_cmd = QB_CODE(0, 0, 2);
5108 +static struct qb_attr_code code_eq_eqdi = QB_CODE(0, 3, 1);
5109 +static struct qb_attr_code code_eq_dca_en = QB_CODE(0, 15, 1);
5110 +static struct qb_attr_code code_eq_dca_pk = QB_CODE(0, 14, 1);
5111 +static struct qb_attr_code code_eq_dca_idx = QB_CODE(0, 8, 2);
5112 +static struct qb_attr_code code_eq_orp_en = QB_CODE(0, 2, 1);
5113 +static struct qb_attr_code code_eq_orp_is_nesn = QB_CODE(0, 31, 1);
5114 +static struct qb_attr_code code_eq_orp_nlis = QB_CODE(0, 30, 1);
5115 +static struct qb_attr_code code_eq_orp_seqnum = QB_CODE(0, 16, 14);
5116 +static struct qb_attr_code code_eq_opr_id = QB_CODE(1, 0, 16);
5117 +static struct qb_attr_code code_eq_tgt_id = QB_CODE(2, 0, 24);
5118 +/* static struct qb_attr_code code_eq_tag = QB_CODE(3, 0, 32); */
5119 +static struct qb_attr_code code_eq_qd_en = QB_CODE(0, 4, 1);
5120 +static struct qb_attr_code code_eq_qd_bin = QB_CODE(4, 0, 16);
5121 +static struct qb_attr_code code_eq_qd_pri = QB_CODE(4, 16, 4);
5122 +static struct qb_attr_code code_eq_rsp_stash = QB_CODE(5, 16, 1);
5123 +static struct qb_attr_code code_eq_rsp_id = QB_CODE(5, 24, 8);
5124 +static struct qb_attr_code code_eq_rsp_lo = QB_CODE(6, 0, 32);
5125 +
5126 +enum qbman_eq_cmd_e {
5127 + /* No enqueue, primarily for plugging ORP gaps for dropped frames */
5128 + qbman_eq_cmd_empty,
5129 + /* DMA an enqueue response once complete */
5130 + qbman_eq_cmd_respond,
5131 + /* DMA an enqueue response only if the enqueue fails */
5132 + qbman_eq_cmd_respond_reject
5133 +};
5134 +
5135 +void qbman_eq_desc_clear(struct qbman_eq_desc *d)
5136 +{
5137 + memset(d, 0, sizeof(*d));
5138 +}
5139 +
5140 +void qbman_eq_desc_set_no_orp(struct qbman_eq_desc *d, int respond_success)
5141 +{
5142 + uint32_t *cl = qb_cl(d);
5143 +
5144 + qb_attr_code_encode(&code_eq_orp_en, cl, 0);
5145 + qb_attr_code_encode(&code_eq_cmd, cl,
5146 + respond_success ? qbman_eq_cmd_respond :
5147 + qbman_eq_cmd_respond_reject);
5148 +}
5149 +
5150 +void qbman_eq_desc_set_orp(struct qbman_eq_desc *d, int respond_success,
5151 + uint32_t opr_id, uint32_t seqnum, int incomplete)
5152 +{
5153 + uint32_t *cl = qb_cl(d);
5154 +
5155 + qb_attr_code_encode(&code_eq_orp_en, cl, 1);
5156 + qb_attr_code_encode(&code_eq_cmd, cl,
5157 + respond_success ? qbman_eq_cmd_respond :
5158 + qbman_eq_cmd_respond_reject);
5159 + qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
5160 + qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
5161 + qb_attr_code_encode(&code_eq_orp_nlis, cl, !!incomplete);
5162 +}
5163 +
5164 +void qbman_eq_desc_set_orp_hole(struct qbman_eq_desc *d, uint32_t opr_id,
5165 + uint32_t seqnum)
5166 +{
5167 + uint32_t *cl = qb_cl(d);
5168 +
5169 + qb_attr_code_encode(&code_eq_orp_en, cl, 1);
5170 + qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
5171 + qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
5172 + qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
5173 + qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
5174 + qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 0);
5175 +}
5176 +
5177 +void qbman_eq_desc_set_orp_nesn(struct qbman_eq_desc *d, uint32_t opr_id,
5178 + uint32_t seqnum)
5179 +{
5180 + uint32_t *cl = qb_cl(d);
5181 +
5182 + qb_attr_code_encode(&code_eq_orp_en, cl, 1);
5183 + qb_attr_code_encode(&code_eq_cmd, cl, qbman_eq_cmd_empty);
5184 + qb_attr_code_encode(&code_eq_opr_id, cl, opr_id);
5185 + qb_attr_code_encode(&code_eq_orp_seqnum, cl, seqnum);
5186 + qb_attr_code_encode(&code_eq_orp_nlis, cl, 0);
5187 + qb_attr_code_encode(&code_eq_orp_is_nesn, cl, 1);
5188 +}
5189 +
5190 +void qbman_eq_desc_set_response(struct qbman_eq_desc *d,
5191 + dma_addr_t storage_phys,
5192 + int stash)
5193 +{
5194 + uint32_t *cl = qb_cl(d);
5195 +
5196 + qb_attr_code_encode_64(&code_eq_rsp_lo, (uint64_t *)cl, storage_phys);
5197 + qb_attr_code_encode(&code_eq_rsp_stash, cl, !!stash);
5198 +}
5199 +
5200 +void qbman_eq_desc_set_token(struct qbman_eq_desc *d, uint8_t token)
5201 +{
5202 + uint32_t *cl = qb_cl(d);
5203 +
5204 + qb_attr_code_encode(&code_eq_rsp_id, cl, (uint32_t)token);
5205 +}
5206 +
5207 +void qbman_eq_desc_set_fq(struct qbman_eq_desc *d, uint32_t fqid)
5208 +{
5209 + uint32_t *cl = qb_cl(d);
5210 +
5211 + qb_attr_code_encode(&code_eq_qd_en, cl, 0);
5212 + qb_attr_code_encode(&code_eq_tgt_id, cl, fqid);
5213 +}
5214 +
5215 +void qbman_eq_desc_set_qd(struct qbman_eq_desc *d, uint32_t qdid,
5216 + uint32_t qd_bin, uint32_t qd_prio)
5217 +{
5218 + uint32_t *cl = qb_cl(d);
5219 +
5220 + qb_attr_code_encode(&code_eq_qd_en, cl, 1);
5221 + qb_attr_code_encode(&code_eq_tgt_id, cl, qdid);
5222 + qb_attr_code_encode(&code_eq_qd_bin, cl, qd_bin);
5223 + qb_attr_code_encode(&code_eq_qd_pri, cl, qd_prio);
5224 +}
5225 +
5226 +void qbman_eq_desc_set_eqdi(struct qbman_eq_desc *d, int enable)
5227 +{
5228 + uint32_t *cl = qb_cl(d);
5229 +
5230 + qb_attr_code_encode(&code_eq_eqdi, cl, !!enable);
5231 +}
5232 +
5233 +void qbman_eq_desc_set_dca(struct qbman_eq_desc *d, int enable,
5234 + uint32_t dqrr_idx, int park)
5235 +{
5236 + uint32_t *cl = qb_cl(d);
5237 +
5238 + qb_attr_code_encode(&code_eq_dca_en, cl, !!enable);
5239 + if (enable) {
5240 + qb_attr_code_encode(&code_eq_dca_pk, cl, !!park);
5241 + qb_attr_code_encode(&code_eq_dca_idx, cl, dqrr_idx);
5242 + }
5243 +}
5244 +
5245 +#define EQAR_IDX(eqar) ((eqar) & 0x7)
5246 +#define EQAR_VB(eqar) ((eqar) & 0x80)
5247 +#define EQAR_SUCCESS(eqar) ((eqar) & 0x100)
5248 +
5249 +int qbman_swp_enqueue(struct qbman_swp *s, const struct qbman_eq_desc *d,
5250 + const struct qbman_fd *fd)
5251 +{
5252 + uint32_t *p;
5253 + const uint32_t *cl = qb_cl(d);
5254 + uint32_t eqar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_EQAR);
5255 +
5256 + pr_debug("EQAR=%08x\n", eqar);
5257 + if (!EQAR_SUCCESS(eqar))
5258 + return -EBUSY;
5259 + p = qbman_cena_write_start(&s->sys,
5260 + QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)));
5261 + word_copy(&p[1], &cl[1], 7);
5262 + word_copy(&p[8], fd, sizeof(*fd) >> 2);
5263 + /* Set the verb byte, have to substitute in the valid-bit */
5264 + p[0] = cl[0] | EQAR_VB(eqar);
5265 + qbman_cena_write_complete(&s->sys,
5266 + QBMAN_CENA_SWP_EQCR(EQAR_IDX(eqar)),
5267 + p);
5268 + return 0;
5269 +}
5270 +
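/* A minimal enqueue sketch: build a descriptor for a plain (non-ORP) enqueue
 * to a frame queue, then retry while the EQCR slot is busy. The frame
 * descriptor and FQID are assumed to be prepared by the caller.
 */
static int example_enqueue(struct qbman_swp *s, uint32_t fqid,
			   const struct qbman_fd *fd)
{
	struct qbman_eq_desc ed;
	int ret;

	qbman_eq_desc_clear(&ed);
	qbman_eq_desc_set_no_orp(&ed, 0);	/* respond only on rejection */
	qbman_eq_desc_set_fq(&ed, fqid);
	do {
		ret = qbman_swp_enqueue(s, &ed, fd);
	} while (ret == -EBUSY);
	return ret;
}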
5271 +/*************************/
5272 +/* Static (push) dequeue */
5273 +/*************************/
5274 +
5275 +void qbman_swp_push_get(struct qbman_swp *s, uint8_t channel_idx, int *enabled)
5276 +{
5277 + struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
5278 +
5279 + BUG_ON(channel_idx > 15);
5280 + *enabled = (int)qb_attr_code_decode(&code, &s->sdq);
5281 +}
5282 +
5283 +void qbman_swp_push_set(struct qbman_swp *s, uint8_t channel_idx, int enable)
5284 +{
5285 + uint16_t dqsrc;
5286 + struct qb_attr_code code = CODE_SDQCR_DQSRC(channel_idx);
5287 +
5288 + BUG_ON(channel_idx > 15);
5289 + qb_attr_code_encode(&code, &s->sdq, !!enable);
5290 +	/* Read back the complete src map. If no channels are enabled
5291 +	   the SDQCR must be 0 or else QMan will assert errors */
5292 + dqsrc = (uint16_t)qb_attr_code_decode(&code_sdqcr_dqsrc, &s->sdq);
5293 + if (dqsrc != 0)
5294 + qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, s->sdq);
5295 + else
5296 + qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_SDQCR, 0);
5297 +}
5298 +
5299 +/***************************/
5300 +/* Volatile (pull) dequeue */
5301 +/***************************/
5302 +
5303 +/* These should be const, eventually */
5304 +static struct qb_attr_code code_pull_dct = QB_CODE(0, 0, 2);
5305 +static struct qb_attr_code code_pull_dt = QB_CODE(0, 2, 2);
5306 +static struct qb_attr_code code_pull_rls = QB_CODE(0, 4, 1);
5307 +static struct qb_attr_code code_pull_stash = QB_CODE(0, 5, 1);
5308 +static struct qb_attr_code code_pull_numframes = QB_CODE(0, 8, 4);
5309 +static struct qb_attr_code code_pull_token = QB_CODE(0, 16, 8);
5310 +static struct qb_attr_code code_pull_dqsource = QB_CODE(1, 0, 24);
5311 +static struct qb_attr_code code_pull_rsp_lo = QB_CODE(2, 0, 32);
5312 +
5313 +enum qb_pull_dt_e {
5314 + qb_pull_dt_channel,
5315 + qb_pull_dt_workqueue,
5316 + qb_pull_dt_framequeue
5317 +};
5318 +
5319 +void qbman_pull_desc_clear(struct qbman_pull_desc *d)
5320 +{
5321 + memset(d, 0, sizeof(*d));
5322 +}
5323 +
5324 +void qbman_pull_desc_set_storage(struct qbman_pull_desc *d,
5325 + struct dpaa2_dq *storage,
5326 + dma_addr_t storage_phys,
5327 + int stash)
5328 +{
5329 + uint32_t *cl = qb_cl(d);
5330 +
5331 + /* Squiggle the pointer 'storage' into the extra 2 words of the
5332 + * descriptor (which aren't copied to the hw command) */
5333 + *(void **)&cl[4] = storage;
5334 + if (!storage) {
5335 + qb_attr_code_encode(&code_pull_rls, cl, 0);
5336 + return;
5337 + }
5338 + qb_attr_code_encode(&code_pull_rls, cl, 1);
5339 + qb_attr_code_encode(&code_pull_stash, cl, !!stash);
5340 + qb_attr_code_encode_64(&code_pull_rsp_lo, (uint64_t *)cl, storage_phys);
5341 +}
5342 +
5343 +void qbman_pull_desc_set_numframes(struct qbman_pull_desc *d, uint8_t numframes)
5344 +{
5345 + uint32_t *cl = qb_cl(d);
5346 +
5347 + BUG_ON(!numframes || (numframes > 16));
5348 + qb_attr_code_encode(&code_pull_numframes, cl,
5349 + (uint32_t)(numframes - 1));
5350 +}
5351 +
5352 +void qbman_pull_desc_set_token(struct qbman_pull_desc *d, uint8_t token)
5353 +{
5354 + uint32_t *cl = qb_cl(d);
5355 +
5356 + qb_attr_code_encode(&code_pull_token, cl, token);
5357 +}
5358 +
5359 +void qbman_pull_desc_set_fq(struct qbman_pull_desc *d, uint32_t fqid)
5360 +{
5361 + uint32_t *cl = qb_cl(d);
5362 +
5363 + qb_attr_code_encode(&code_pull_dct, cl, 1);
5364 + qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_framequeue);
5365 + qb_attr_code_encode(&code_pull_dqsource, cl, fqid);
5366 +}
5367 +
5368 +void qbman_pull_desc_set_wq(struct qbman_pull_desc *d, uint32_t wqid,
5369 + enum qbman_pull_type_e dct)
5370 +{
5371 + uint32_t *cl = qb_cl(d);
5372 +
5373 + qb_attr_code_encode(&code_pull_dct, cl, dct);
5374 + qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_workqueue);
5375 + qb_attr_code_encode(&code_pull_dqsource, cl, wqid);
5376 +}
5377 +
5378 +void qbman_pull_desc_set_channel(struct qbman_pull_desc *d, uint32_t chid,
5379 + enum qbman_pull_type_e dct)
5380 +{
5381 + uint32_t *cl = qb_cl(d);
5382 +
5383 + qb_attr_code_encode(&code_pull_dct, cl, dct);
5384 + qb_attr_code_encode(&code_pull_dt, cl, qb_pull_dt_channel);
5385 + qb_attr_code_encode(&code_pull_dqsource, cl, chid);
5386 +}
5387 +
5388 +int qbman_swp_pull(struct qbman_swp *s, struct qbman_pull_desc *d)
5389 +{
5390 + uint32_t *p;
5391 + uint32_t *cl = qb_cl(d);
5392 +
5393 + if (!atomic_dec_and_test(&s->vdq.busy)) {
5394 + atomic_inc(&s->vdq.busy);
5395 + return -EBUSY;
5396 + }
5397 + s->vdq.storage = *(void **)&cl[4];
5398 + qb_attr_code_encode(&code_pull_token, cl, 1);
5399 + p = qbman_cena_write_start(&s->sys, QBMAN_CENA_SWP_VDQCR);
5400 + word_copy(&p[1], &cl[1], 3);
5401 + /* Set the verb byte, have to substitute in the valid-bit */
5402 + p[0] = cl[0] | s->vdq.valid_bit;
5403 + s->vdq.valid_bit ^= QB_VALID_BIT;
5404 + qbman_cena_write_complete(&s->sys, QBMAN_CENA_SWP_VDQCR, p);
5405 + return 0;
5406 +}
5407 +
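/* A minimal volatile-dequeue sketch: pull up to 4 frames from a frame queue
 * into caller-provided DMA-able storage, then busy-poll that storage. The
 * storage and its physical address are assumed to be allocated by the caller.
 */
static int example_pull(struct qbman_swp *s, uint32_t fqid,
			struct dpaa2_dq *storage, dma_addr_t storage_phys)
{
	struct qbman_pull_desc pd;
	int ret;

	qbman_pull_desc_clear(&pd);
	qbman_pull_desc_set_numframes(&pd, 4);
	qbman_pull_desc_set_fq(&pd, fqid);
	qbman_pull_desc_set_storage(&pd, storage, storage_phys, 1);
	ret = qbman_swp_pull(s, &pd);
	if (ret)
		return ret;	/* -EBUSY: a pull is already outstanding */
	while (!qbman_result_has_new_result(s, storage))
		cpu_relax();
	return 0;
}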
5408 +/****************/
5409 +/* Polling DQRR */
5410 +/****************/
5411 +
5412 +static struct qb_attr_code code_dqrr_verb = QB_CODE(0, 0, 8);
5413 +static struct qb_attr_code code_dqrr_response = QB_CODE(0, 0, 7);
5414 +static struct qb_attr_code code_dqrr_stat = QB_CODE(0, 8, 8);
5415 +static struct qb_attr_code code_dqrr_seqnum = QB_CODE(0, 16, 14);
5416 +static struct qb_attr_code code_dqrr_odpid = QB_CODE(1, 0, 16);
5417 +/* static struct qb_attr_code code_dqrr_tok = QB_CODE(1, 24, 8); */
5418 +static struct qb_attr_code code_dqrr_fqid = QB_CODE(2, 0, 24);
5419 +static struct qb_attr_code code_dqrr_byte_count = QB_CODE(4, 0, 32);
5420 +static struct qb_attr_code code_dqrr_frame_count = QB_CODE(5, 0, 24);
5421 +static struct qb_attr_code code_dqrr_ctx_lo = QB_CODE(6, 0, 32);
5422 +
5423 +#define QBMAN_RESULT_DQ 0x60
5424 +#define QBMAN_RESULT_FQRN 0x21
5425 +#define QBMAN_RESULT_FQRNI 0x22
5426 +#define QBMAN_RESULT_FQPN 0x24
5427 +#define QBMAN_RESULT_FQDAN 0x25
5428 +#define QBMAN_RESULT_CDAN 0x26
5429 +#define QBMAN_RESULT_CSCN_MEM 0x27
5430 +#define QBMAN_RESULT_CGCU 0x28
5431 +#define QBMAN_RESULT_BPSCN 0x29
5432 +#define QBMAN_RESULT_CSCN_WQ 0x2a
5433 +
5434 +static struct qb_attr_code code_dqpi_pi = QB_CODE(0, 0, 4);
5435 +
5436 +/* NULL return if there are no unconsumed DQRR entries. Returns a DQRR entry
5437 + * only once, so repeated calls can return a sequence of DQRR entries, without
5438 + * requiring they be consumed immediately or in any particular order. */
5439 +const struct dpaa2_dq *qbman_swp_dqrr_next(struct qbman_swp *s)
5440 +{
5441 + uint32_t verb;
5442 + uint32_t response_verb;
5443 + uint32_t flags;
5444 + const struct dpaa2_dq *dq;
5445 + const uint32_t *p;
5446 +
5447 + /* Before using valid-bit to detect if something is there, we have to
5448 + * handle the case of the DQRR reset bug... */
5449 +#ifdef WORKAROUND_DQRR_RESET_BUG
5450 + if (unlikely(s->dqrr.reset_bug)) {
5451 + /* We pick up new entries by cache-inhibited producer index,
5452 + * which means that a non-coherent mapping would require us to
5453 + * invalidate and read *only* once that PI has indicated that
5454 + * there's an entry here. The first trip around the DQRR ring
5455 + * will be much less efficient than all subsequent trips around
5456 + * it...
5457 + */
5458 + uint32_t dqpi = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_DQPI);
5459 + uint32_t pi = qb_attr_code_decode(&code_dqpi_pi, &dqpi);
5460 + /* there are new entries iff pi != next_idx */
5461 + if (pi == s->dqrr.next_idx)
5462 + return NULL;
5463 + /* if next_idx is/was the last ring index, and 'pi' is
5464 + * different, we can disable the workaround as all the ring
5465 + * entries have now been DMA'd to so valid-bit checking is
5466 + * repaired. Note: this logic needs to be based on next_idx
5467 + * (which increments one at a time), rather than on pi (which
5468 + * can burst and wrap-around between our snapshots of it).
5469 + */
5470 + if (s->dqrr.next_idx == (s->dqrr.dqrr_size - 1)) {
5471 + pr_debug("DEBUG: next_idx=%d, pi=%d, clear reset bug\n",
5472 + s->dqrr.next_idx, pi);
5473 + s->dqrr.reset_bug = 0;
5474 + }
5475 + qbman_cena_invalidate_prefetch(&s->sys,
5476 + QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
5477 + }
5478 +#endif
5479 +
5480 + dq = qbman_cena_read(&s->sys, QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
5481 + p = qb_cl(dq);
5482 + verb = qb_attr_code_decode(&code_dqrr_verb, p);
5483 +
5484 + /* If the valid-bit isn't of the expected polarity, nothing there. Note,
5485 +	 * in the DQRR reset bug workaround, we shouldn't need to skip this
5486 +	 * check, because we've already determined that a new entry is available
5487 + * and we've invalidated the cacheline before reading it, so the
5488 + * valid-bit behaviour is repaired and should tell us what we already
5489 + * knew from reading PI.
5490 + */
5491 + if ((verb & QB_VALID_BIT) != s->dqrr.valid_bit) {
5492 + qbman_cena_invalidate_prefetch(&s->sys,
5493 + QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
5494 + return NULL;
5495 + }
5496 +	/* There's something there. Advance "next_idx" to the next ring
5497 +	 * entry (and prefetch it) before returning what we found. */
5498 + s->dqrr.next_idx++;
5499 + s->dqrr.next_idx &= s->dqrr.dqrr_size - 1; /* Wrap around */
5500 + /* TODO: it's possible to do all this without conditionals, optimise it
5501 + * later. */
5502 + if (!s->dqrr.next_idx)
5503 + s->dqrr.valid_bit ^= QB_VALID_BIT;
5504 +
5505 + /* If this is the final response to a volatile dequeue command
5506 + indicate that the vdq is no longer busy */
5507 + flags = dpaa2_dq_flags(dq);
5508 + response_verb = qb_attr_code_decode(&code_dqrr_response, &verb);
5509 + if ((response_verb == QBMAN_RESULT_DQ) &&
5510 + (flags & DPAA2_DQ_STAT_VOLATILE) &&
5511 + (flags & DPAA2_DQ_STAT_EXPIRED))
5512 + atomic_inc(&s->vdq.busy);
5513 +
5514 + qbman_cena_invalidate_prefetch(&s->sys,
5515 + QBMAN_CENA_SWP_DQRR(s->dqrr.next_idx));
5516 + return dq;
5517 +}
5518 +
5519 +/* Consume DQRR entries previously returned from qbman_swp_dqrr_next(). */
5520 +void qbman_swp_dqrr_consume(struct qbman_swp *s, const struct dpaa2_dq *dq)
5521 +{
5522 + qbman_cinh_write(&s->sys, QBMAN_CINH_SWP_DCAP, QBMAN_IDX_FROM_DQRR(dq));
5523 +}
5524 +
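/* A minimal DQRR polling sketch: drain whatever entries are currently
 * visible, handing frame dequeues to a caller-supplied hook (process_fd() is
 * hypothetical) and consuming each entry afterwards.
 */
static void example_poll_dqrr(struct qbman_swp *s)
{
	const struct dpaa2_dq *dq;

	while ((dq = qbman_swp_dqrr_next(s)) != NULL) {
		if (qbman_result_is_DQ(dq))
			process_fd(dpaa2_dq_fd(dq));	/* hypothetical hook */
		qbman_swp_dqrr_consume(s, dq);
	}
}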
5525 +/*********************************/
5526 +/* Polling user-provided storage */
5527 +/*********************************/
5528 +
5529 +int qbman_result_has_new_result(struct qbman_swp *s,
5530 + const struct dpaa2_dq *dq)
5531 +{
5532 + /* To avoid converting the little-endian DQ entry to host-endian prior
5533 + * to us knowing whether there is a valid entry or not (and run the
5534 + * risk of corrupting the incoming hardware LE write), we detect in
5535 + * hardware endianness rather than host. This means we need a different
5536 + * "code" depending on whether we are BE or LE in software, which is
5537 + * where DQRR_TOK_OFFSET comes in... */
5538 + static struct qb_attr_code code_dqrr_tok_detect =
5539 + QB_CODE(0, DQRR_TOK_OFFSET, 8);
5540 + /* The user trying to poll for a result treats "dq" as const. It is
5541 + * however the same address that was provided to us non-const in the
5542 + * first place, for directing hardware DMA to. So we can cast away the
5543 + * const because it is mutable from our perspective. */
5544 + uint32_t *p = qb_cl((struct dpaa2_dq *)dq);
5545 + uint32_t token;
5546 +
5547 + token = qb_attr_code_decode(&code_dqrr_tok_detect, &p[1]);
5548 + if (token != 1)
5549 + return 0;
5550 + qb_attr_code_encode(&code_dqrr_tok_detect, &p[1], 0);
5551 +
5552 + /* Only now do we convert from hardware to host endianness. Also, as we
5553 + * are returning success, the user has promised not to call us again, so
5554 + * there's no risk of us converting the endianness twice... */
5555 + make_le32_n(p, 16);
5556 +
5557 + /* VDQCR "no longer busy" hook - not quite the same as DQRR, because the
5558 + * fact "VDQCR" shows busy doesn't mean that the result we're looking at
5559 + * is from the same command. Eg. we may be looking at our 10th dequeue
5560 + * result from our first VDQCR command, yet the second dequeue command
5561 + * could have been kicked off already, after seeing the 1st result. Ie.
5562 + * the result we're looking at is not necessarily proof that we can
5563 + * reset "busy". We instead base the decision on whether the current
5564 + * result is sitting at the first 'storage' location of the busy
5565 + * command. */
5566 + if (s->vdq.storage == dq) {
5567 + s->vdq.storage = NULL;
5568 + atomic_inc(&s->vdq.busy);
5569 + }
5570 + return 1;
5571 +}
5572 +
5573 +/********************************/
5574 +/* Categorising qbman_result */
5575 +/********************************/
5576 +
5577 +static struct qb_attr_code code_result_in_mem =
5578 + QB_CODE(0, QBMAN_RESULT_VERB_OFFSET_IN_MEM, 7);
5579 +
5580 +static inline int __qbman_result_is_x(const struct dpaa2_dq *dq, uint32_t x)
5581 +{
5582 + const uint32_t *p = qb_cl(dq);
5583 + uint32_t response_verb = qb_attr_code_decode(&code_dqrr_response, p);
5584 +
5585 + return response_verb == x;
5586 +}
5587 +
5588 +static inline int __qbman_result_is_x_in_mem(const struct dpaa2_dq *dq,
5589 + uint32_t x)
5590 +{
5591 + const uint32_t *p = qb_cl(dq);
5592 + uint32_t response_verb = qb_attr_code_decode(&code_result_in_mem, p);
5593 +
5594 + return (response_verb == x);
5595 +}
5596 +
5597 +int qbman_result_is_DQ(const struct dpaa2_dq *dq)
5598 +{
5599 + return __qbman_result_is_x(dq, QBMAN_RESULT_DQ);
5600 +}
5601 +
5602 +int qbman_result_is_FQDAN(const struct dpaa2_dq *dq)
5603 +{
5604 + return __qbman_result_is_x(dq, QBMAN_RESULT_FQDAN);
5605 +}
5606 +
5607 +int qbman_result_is_CDAN(const struct dpaa2_dq *dq)
5608 +{
5609 + return __qbman_result_is_x(dq, QBMAN_RESULT_CDAN);
5610 +}
5611 +
5612 +int qbman_result_is_CSCN(const struct dpaa2_dq *dq)
5613 +{
5614 + return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CSCN_MEM) ||
5615 + __qbman_result_is_x(dq, QBMAN_RESULT_CSCN_WQ);
5616 +}
5617 +
5618 +int qbman_result_is_BPSCN(const struct dpaa2_dq *dq)
5619 +{
5620 + return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_BPSCN);
5621 +}
5622 +
5623 +int qbman_result_is_CGCU(const struct dpaa2_dq *dq)
5624 +{
5625 + return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_CGCU);
5626 +}
5627 +
5628 +int qbman_result_is_FQRN(const struct dpaa2_dq *dq)
5629 +{
5630 + return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRN);
5631 +}
5632 +
5633 +int qbman_result_is_FQRNI(const struct dpaa2_dq *dq)
5634 +{
5635 + return __qbman_result_is_x_in_mem(dq, QBMAN_RESULT_FQRNI);
5636 +}
5637 +
5638 +int qbman_result_is_FQPN(const struct dpaa2_dq *dq)
5639 +{
5640 + return __qbman_result_is_x(dq, QBMAN_RESULT_FQPN);
5641 +}
5642 +
5643 +/*********************************/
5644 +/* Parsing frame dequeue results */
5645 +/*********************************/
5646 +
5647 +/* These APIs assume qbman_result_is_DQ() is TRUE */
5648 +
5649 +uint32_t dpaa2_dq_flags(const struct dpaa2_dq *dq)
5650 +{
5651 + const uint32_t *p = qb_cl(dq);
5652 +
5653 + return qb_attr_code_decode(&code_dqrr_stat, p);
5654 +}
5655 +
5656 +uint16_t dpaa2_dq_seqnum(const struct dpaa2_dq *dq)
5657 +{
5658 + const uint32_t *p = qb_cl(dq);
5659 +
5660 + return (uint16_t)qb_attr_code_decode(&code_dqrr_seqnum, p);
5661 +}
5662 +
5663 +uint16_t dpaa2_dq_odpid(const struct dpaa2_dq *dq)
5664 +{
5665 + const uint32_t *p = qb_cl(dq);
5666 +
5667 + return (uint16_t)qb_attr_code_decode(&code_dqrr_odpid, p);
5668 +}
5669 +
5670 +uint32_t dpaa2_dq_fqid(const struct dpaa2_dq *dq)
5671 +{
5672 + const uint32_t *p = qb_cl(dq);
5673 +
5674 + return qb_attr_code_decode(&code_dqrr_fqid, p);
5675 +}
5676 +
5677 +uint32_t dpaa2_dq_byte_count(const struct dpaa2_dq *dq)
5678 +{
5679 + const uint32_t *p = qb_cl(dq);
5680 +
5681 + return qb_attr_code_decode(&code_dqrr_byte_count, p);
5682 +}
5683 +
5684 +uint32_t dpaa2_dq_frame_count(const struct dpaa2_dq *dq)
5685 +{
5686 + const uint32_t *p = qb_cl(dq);
5687 +
5688 + return qb_attr_code_decode(&code_dqrr_frame_count, p);
5689 +}
5690 +
5691 +uint64_t dpaa2_dq_fqd_ctx(const struct dpaa2_dq *dq)
5692 +{
5693 + const uint64_t *p = (uint64_t *)qb_cl(dq);
5694 +
5695 + return qb_attr_code_decode_64(&code_dqrr_ctx_lo, p);
5696 +}
5697 +EXPORT_SYMBOL(dpaa2_dq_fqd_ctx);
5698 +
5699 +const struct dpaa2_fd *dpaa2_dq_fd(const struct dpaa2_dq *dq)
5700 +{
5701 + const uint32_t *p = qb_cl(dq);
5702 +
5703 + return (const struct dpaa2_fd *)&p[8];
5704 +}
5705 +EXPORT_SYMBOL(dpaa2_dq_fd);
5706 +
5707 +/**************************************/
5708 +/* Parsing state-change notifications */
5709 +/**************************************/
5710 +
5711 +static struct qb_attr_code code_scn_state = QB_CODE(0, 16, 8);
5712 +static struct qb_attr_code code_scn_rid = QB_CODE(1, 0, 24);
5713 +static struct qb_attr_code code_scn_state_in_mem =
5714 + QB_CODE(0, SCN_STATE_OFFSET_IN_MEM, 8);
5715 +static struct qb_attr_code code_scn_rid_in_mem =
5716 + QB_CODE(1, SCN_RID_OFFSET_IN_MEM, 24);
5717 +static struct qb_attr_code code_scn_ctx_lo = QB_CODE(2, 0, 32);
5718 +
5719 +uint8_t qbman_result_SCN_state(const struct dpaa2_dq *scn)
5720 +{
5721 + const uint32_t *p = qb_cl(scn);
5722 +
5723 + return (uint8_t)qb_attr_code_decode(&code_scn_state, p);
5724 +}
5725 +
5726 +uint32_t qbman_result_SCN_rid(const struct dpaa2_dq *scn)
5727 +{
5728 + const uint32_t *p = qb_cl(scn);
5729 +
5730 + return qb_attr_code_decode(&code_scn_rid, p);
5731 +}
5732 +
5733 +uint64_t qbman_result_SCN_ctx(const struct dpaa2_dq *scn)
5734 +{
5735 + const uint64_t *p = (uint64_t *)qb_cl(scn);
5736 +
5737 + return qb_attr_code_decode_64(&code_scn_ctx_lo, p);
5738 +}
5739 +
5740 +uint8_t qbman_result_SCN_state_in_mem(const struct dpaa2_dq *scn)
5741 +{
5742 + const uint32_t *p = qb_cl(scn);
5743 +
5744 + return (uint8_t)qb_attr_code_decode(&code_scn_state_in_mem, p);
5745 +}
5746 +
5747 +uint32_t qbman_result_SCN_rid_in_mem(const struct dpaa2_dq *scn)
5748 +{
5749 + const uint32_t *p = qb_cl(scn);
5750 + uint32_t result_rid;
5751 +
5752 + result_rid = qb_attr_code_decode(&code_scn_rid_in_mem, p);
5753 + return make_le24(result_rid);
5754 +}
5755 +
5756 +/*****************/
5757 +/* Parsing BPSCN */
5758 +/*****************/
5759 +uint16_t qbman_result_bpscn_bpid(const struct dpaa2_dq *scn)
5760 +{
5761 + return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0x3FFF;
5762 +}
5763 +
5764 +int qbman_result_bpscn_has_free_bufs(const struct dpaa2_dq *scn)
5765 +{
5766 + return !(int)(qbman_result_SCN_state_in_mem(scn) & 0x1);
5767 +}
5768 +
5769 +int qbman_result_bpscn_is_depleted(const struct dpaa2_dq *scn)
5770 +{
5771 + return (int)(qbman_result_SCN_state_in_mem(scn) & 0x2);
5772 +}
5773 +
5774 +int qbman_result_bpscn_is_surplus(const struct dpaa2_dq *scn)
5775 +{
5776 + return (int)(qbman_result_SCN_state_in_mem(scn) & 0x4);
5777 +}
5778 +
5779 +uint64_t qbman_result_bpscn_ctx(const struct dpaa2_dq *scn)
5780 +{
5781 + return qbman_result_SCN_ctx(scn);
5782 +}
5783 +
5784 +/*****************/
5785 +/* Parsing CGCU */
5786 +/*****************/
5787 +uint16_t qbman_result_cgcu_cgid(const struct dpaa2_dq *scn)
5788 +{
5789 + return (uint16_t)qbman_result_SCN_rid_in_mem(scn) & 0xFFFF;
5790 +}
5791 +
5792 +uint64_t qbman_result_cgcu_icnt(const struct dpaa2_dq *scn)
5793 +{
5794 + return qbman_result_SCN_ctx(scn) & 0xFFFFFFFFFF;
5795 +}
5796 +
5797 +/******************/
5798 +/* Buffer release */
5799 +/******************/
5800 +
5801 +/* These should be const, eventually */
5802 +/* static struct qb_attr_code code_release_num = QB_CODE(0, 0, 3); */
5803 +static struct qb_attr_code code_release_set_me = QB_CODE(0, 5, 1);
5804 +static struct qb_attr_code code_release_rcdi = QB_CODE(0, 6, 1);
5805 +static struct qb_attr_code code_release_bpid = QB_CODE(0, 16, 16);
5806 +
5807 +void qbman_release_desc_clear(struct qbman_release_desc *d)
5808 +{
5809 + uint32_t *cl;
5810 +
5811 + memset(d, 0, sizeof(*d));
5812 + cl = qb_cl(d);
5813 + qb_attr_code_encode(&code_release_set_me, cl, 1);
5814 +}
5815 +
5816 +void qbman_release_desc_set_bpid(struct qbman_release_desc *d, uint32_t bpid)
5817 +{
5818 + uint32_t *cl = qb_cl(d);
5819 +
5820 + qb_attr_code_encode(&code_release_bpid, cl, bpid);
5821 +}
5822 +
5823 +void qbman_release_desc_set_rcdi(struct qbman_release_desc *d, int enable)
5824 +{
5825 + uint32_t *cl = qb_cl(d);
5826 +
5827 + qb_attr_code_encode(&code_release_rcdi, cl, !!enable);
5828 +}
5829 +
5830 +#define RAR_IDX(rar) ((rar) & 0x7)
5831 +#define RAR_VB(rar) ((rar) & 0x80)
5832 +#define RAR_SUCCESS(rar) ((rar) & 0x100)
5833 +
5834 +int qbman_swp_release(struct qbman_swp *s, const struct qbman_release_desc *d,
5835 + const uint64_t *buffers, unsigned int num_buffers)
5836 +{
5837 + uint32_t *p;
5838 + const uint32_t *cl = qb_cl(d);
5839 + uint32_t rar = qbman_cinh_read(&s->sys, QBMAN_CINH_SWP_RAR);
5840 +
5841 + pr_debug("RAR=%08x\n", rar);
5842 + if (!RAR_SUCCESS(rar))
5843 + return -EBUSY;
5844 + BUG_ON(!num_buffers || (num_buffers > 7));
5845 + /* Start the release command */
5846 + p = qbman_cena_write_start(&s->sys,
5847 + QBMAN_CENA_SWP_RCR(RAR_IDX(rar)));
5848 + /* Copy the caller's buffer pointers to the command */
5849 + u64_to_le32_copy(&p[2], buffers, num_buffers);
5850 + /* Set the verb byte, have to substitute in the valid-bit and the number
5851 + * of buffers. */
5852 + p[0] = cl[0] | RAR_VB(rar) | num_buffers;
5853 + qbman_cena_write_complete(&s->sys,
5854 + QBMAN_CENA_SWP_RCR(RAR_IDX(rar)),
5855 + p);
5856 + return 0;
5857 +}
5858 +
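/* A minimal buffer-release sketch: return a few buffer addresses to a pool,
 * spinning while the release command ring is busy. The BPID and buffer array
 * are assumed to come from the caller (num must be 1..7).
 */
static int example_release(struct qbman_swp *s, uint32_t bpid,
			   const uint64_t *bufs, unsigned int num)
{
	struct qbman_release_desc rd;
	int ret;

	qbman_release_desc_clear(&rd);
	qbman_release_desc_set_bpid(&rd, bpid);
	do {
		ret = qbman_swp_release(s, &rd, bufs, num);
	} while (ret == -EBUSY);
	return ret;
}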
5859 +/*******************/
5860 +/* Buffer acquires */
5861 +/*******************/
5862 +
5863 +/* These should be const, eventually */
5864 +static struct qb_attr_code code_acquire_bpid = QB_CODE(0, 16, 16);
5865 +static struct qb_attr_code code_acquire_num = QB_CODE(1, 0, 3);
5866 +static struct qb_attr_code code_acquire_r_num = QB_CODE(1, 0, 3);
5867 +
5868 +int qbman_swp_acquire(struct qbman_swp *s, uint32_t bpid, uint64_t *buffers,
5869 + unsigned int num_buffers)
5870 +{
5871 + uint32_t *p;
5872 + uint32_t verb, rslt, num;
5873 +
5874 + BUG_ON(!num_buffers || (num_buffers > 7));
5875 +
5876 + /* Start the management command */
5877 + p = qbman_swp_mc_start(s);
5878 +
5879 + if (!p)
5880 + return -EBUSY;
5881 +
5882 + /* Encode the caller-provided attributes */
5883 + qb_attr_code_encode(&code_acquire_bpid, p, bpid);
5884 + qb_attr_code_encode(&code_acquire_num, p, num_buffers);
5885 +
5886 + /* Complete the management command */
5887 + p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_MC_ACQUIRE);
5888 +
5889 + /* Decode the outcome */
5890 + verb = qb_attr_code_decode(&code_generic_verb, p);
5891 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
5892 + num = qb_attr_code_decode(&code_acquire_r_num, p);
5893 + BUG_ON(verb != QBMAN_MC_ACQUIRE);
5894 +
5895 + /* Determine success or failure */
5896 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
5897 + pr_err("Acquire buffers from BPID 0x%x failed, code=0x%02x\n",
5898 + bpid, rslt);
5899 + return -EIO;
5900 + }
5901 + BUG_ON(num > num_buffers);
5902 + /* Copy the acquired buffers to the caller's array */
5903 + u64_from_le32_copy(buffers, &p[2], num);
5904 + return (int)num;
5905 +}
5906 +
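/* A minimal acquire sketch: qbman_swp_acquire() returns the number of
 * buffers actually obtained, which may be fewer than requested, so callers
 * should use the returned count rather than the requested one.
 */
static int example_acquire(struct qbman_swp *s, uint32_t bpid)
{
	uint64_t bufs[7];
	int n = qbman_swp_acquire(s, bpid, bufs, 7);

	if (n < 0)
		return n;	/* -EBUSY or -EIO */
	pr_info("acquired %d of 7 buffers from BPID 0x%x\n", n, bpid);
	return n;
}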
5907 +/*****************/
5908 +/* FQ management */
5909 +/*****************/
5910 +
5911 +static struct qb_attr_code code_fqalt_fqid = QB_CODE(1, 0, 32);
5912 +
5913 +static int qbman_swp_alt_fq_state(struct qbman_swp *s, uint32_t fqid,
5914 + uint8_t alt_fq_verb)
5915 +{
5916 + uint32_t *p;
5917 + uint32_t verb, rslt;
5918 +
5919 + /* Start the management command */
5920 + p = qbman_swp_mc_start(s);
5921 + if (!p)
5922 + return -EBUSY;
5923 +
5924 + qb_attr_code_encode(&code_fqalt_fqid, p, fqid);
5925 + /* Complete the management command */
5926 + p = qbman_swp_mc_complete(s, p, p[0] | alt_fq_verb);
5927 +
5928 + /* Decode the outcome */
5929 + verb = qb_attr_code_decode(&code_generic_verb, p);
5930 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
5931 + BUG_ON(verb != alt_fq_verb);
5932 +
5933 + /* Determine success or failure */
5934 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
5935 + pr_err("ALT FQID %d failed: verb = 0x%08x, code = 0x%02x\n",
5936 + fqid, alt_fq_verb, rslt);
5937 + return -EIO;
5938 + }
5939 +
5940 + return 0;
5941 +}
5942 +
5943 +int qbman_swp_fq_schedule(struct qbman_swp *s, uint32_t fqid)
5944 +{
5945 + return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_SCHEDULE);
5946 +}
5947 +
5948 +int qbman_swp_fq_force(struct qbman_swp *s, uint32_t fqid)
5949 +{
5950 + return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_FORCE);
5951 +}
5952 +
5953 +int qbman_swp_fq_xon(struct qbman_swp *s, uint32_t fqid)
5954 +{
5955 + return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XON);
5956 +}
5957 +
5958 +int qbman_swp_fq_xoff(struct qbman_swp *s, uint32_t fqid)
5959 +{
5960 + return qbman_swp_alt_fq_state(s, fqid, QBMAN_FQ_XOFF);
5961 +}
5962 +
5963 +/**********************/
5964 +/* Channel management */
5965 +/**********************/
5966 +
5967 +static struct qb_attr_code code_cdan_cid = QB_CODE(0, 16, 12);
5968 +static struct qb_attr_code code_cdan_we = QB_CODE(1, 0, 8);
5969 +static struct qb_attr_code code_cdan_en = QB_CODE(1, 8, 1);
5970 +static struct qb_attr_code code_cdan_ctx_lo = QB_CODE(2, 0, 32);
5971 +
5972 +/* Hide "ICD" for now as we don't use it, don't set it, and don't test it, so it
5973 + * would be irresponsible to expose it. */
5974 +#define CODE_CDAN_WE_EN 0x1
5975 +#define CODE_CDAN_WE_CTX 0x4
5976 +
5977 +static int qbman_swp_CDAN_set(struct qbman_swp *s, uint16_t channelid,
5978 + uint8_t we_mask, uint8_t cdan_en,
5979 + uint64_t ctx)
5980 +{
5981 + uint32_t *p;
5982 + uint32_t verb, rslt;
5983 +
5984 + /* Start the management command */
5985 + p = qbman_swp_mc_start(s);
5986 + if (!p)
5987 + return -EBUSY;
5988 +
5989 + /* Encode the caller-provided attributes */
5990 + qb_attr_code_encode(&code_cdan_cid, p, channelid);
5991 + qb_attr_code_encode(&code_cdan_we, p, we_mask);
5992 + qb_attr_code_encode(&code_cdan_en, p, cdan_en);
5993 + qb_attr_code_encode_64(&code_cdan_ctx_lo, (uint64_t *)p, ctx);
5994 + /* Complete the management command */
5995 + p = qbman_swp_mc_complete(s, p, p[0] | QBMAN_WQCHAN_CONFIGURE);
5996 +
5997 + /* Decode the outcome */
5998 + verb = qb_attr_code_decode(&code_generic_verb, p);
5999 + rslt = qb_attr_code_decode(&code_generic_rslt, p);
6000 + BUG_ON(verb != QBMAN_WQCHAN_CONFIGURE);
6001 +
6002 + /* Determine success or failure */
6003 + if (unlikely(rslt != QBMAN_MC_RSLT_OK)) {
6004 + pr_err("CDAN cQID %d failed: code = 0x%02x\n",
6005 + channelid, rslt);
6006 + return -EIO;
6007 + }
6008 +
6009 + return 0;
6010 +}
6011 +
6012 +int qbman_swp_CDAN_set_context(struct qbman_swp *s, uint16_t channelid,
6013 + uint64_t ctx)
6014 +{
6015 + return qbman_swp_CDAN_set(s, channelid,
6016 + CODE_CDAN_WE_CTX,
6017 + 0, ctx);
6018 +}
6019 +
6020 +int qbman_swp_CDAN_enable(struct qbman_swp *s, uint16_t channelid)
6021 +{
6022 + return qbman_swp_CDAN_set(s, channelid,
6023 + CODE_CDAN_WE_EN,
6024 + 1, 0);
6025 +}
+
6026 +int qbman_swp_CDAN_disable(struct qbman_swp *s, uint16_t channelid)
6027 +{
6028 + return qbman_swp_CDAN_set(s, channelid,
6029 + CODE_CDAN_WE_EN,
6030 + 0, 0);
6031 +}
+
6032 +int qbman_swp_CDAN_set_context_enable(struct qbman_swp *s, uint16_t channelid,
6033 + uint64_t ctx)
6034 +{
6035 + return qbman_swp_CDAN_set(s, channelid,
6036 + CODE_CDAN_WE_EN | CODE_CDAN_WE_CTX,
6037 + 1, ctx);
6038 +}
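+
+/* Illustrative sketch (editorial addition, not part of the original patch):
+ * arming channel notifications typically means programming a context and
+ * enabling CDAN generation; the combined helper above does both in a single
+ * WQCHAN_CONFIGURE management command.
+ */
+static inline int example_channel_arm(struct qbman_swp *s, uint16_t channelid,
+ uint64_t ctx)
+{
+ /* ctx is carried back in the CDAN message so the handler can identify
+ * which software object owns the channel. */
+ return qbman_swp_CDAN_set_context_enable(s, channelid, ctx);
+}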
6039 --- /dev/null
6040 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_portal.h
6041 @@ -0,0 +1,261 @@
6042 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
6043 + *
6044 + * Redistribution and use in source and binary forms, with or without
6045 + * modification, are permitted provided that the following conditions are met:
6046 + * * Redistributions of source code must retain the above copyright
6047 + * notice, this list of conditions and the following disclaimer.
6048 + * * Redistributions in binary form must reproduce the above copyright
6049 + * notice, this list of conditions and the following disclaimer in the
6050 + * documentation and/or other materials provided with the distribution.
6051 + * * Neither the name of Freescale Semiconductor nor the
6052 + * names of its contributors may be used to endorse or promote products
6053 + * derived from this software without specific prior written permission.
6054 + *
6055 + *
6056 + * ALTERNATIVELY, this software may be distributed under the terms of the
6057 + * GNU General Public License ("GPL") as published by the Free Software
6058 + * Foundation, either version 2 of that License or (at your option) any
6059 + * later version.
6060 + *
6061 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
6062 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
6063 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
6064 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
6065 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
6066 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
6067 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
6068 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6069 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
6070 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6071 + */
6072 +
6073 +#include "qbman_private.h"
6074 +#include "fsl_qbman_portal.h"
6075 +#include "../../include/fsl_dpaa2_fd.h"
6076 +
6077 +/* All QBMan command and result structures use this "valid bit" encoding */
6078 +#define QB_VALID_BIT ((uint32_t)0x80)
6079 +
6080 +/* Management command result codes */
6081 +#define QBMAN_MC_RSLT_OK 0xf0
6082 +
6083 +/* TBD: as of QBMan 4.1, DQRR will be 8 rather than 4! */
6084 +#define QBMAN_DQRR_SIZE 4
6085 +
6086 +/* DQRR valid-bit reset bug. See qbman_portal.c::qbman_swp_init(). */
6087 +#define WORKAROUND_DQRR_RESET_BUG
6088 +
6089 +/* --------------------- */
6090 +/* portal data structure */
6091 +/* --------------------- */
6092 +
6093 +struct qbman_swp {
6094 + const struct qbman_swp_desc *desc;
6095 + /* The qbman_sys (ie. arch/OS-specific) support code can put anything it
6096 + * needs in here. */
6097 + struct qbman_swp_sys sys;
6098 + /* Management commands */
6099 + struct {
6100 +#ifdef QBMAN_CHECKING
6101 + enum swp_mc_check {
6102 + swp_mc_can_start, /* call __qbman_swp_mc_start() */
6103 + swp_mc_can_submit, /* call __qbman_swp_mc_submit() */
6104 + swp_mc_can_poll, /* call __qbman_swp_mc_result() */
6105 + } check;
6106 +#endif
6107 + uint32_t valid_bit; /* 0x00 or 0x80 */
6108 + } mc;
6109 + /* Push dequeues */
6110 + uint32_t sdq;
6111 + /* Volatile dequeues */
6112 + struct {
6113 + /* VDQCR supports a "1 deep pipeline", meaning that if you know
6114 + * the last-submitted command is already executing in the
6115 + * hardware (as evidenced by at least 1 valid dequeue result),
6116 + * you can write another dequeue command to the register; the
6117 + * hardware will start executing it as soon as the
6118 + * already-executing command terminates. (This minimises latency
6119 + * and stalls.) With that in mind, this "busy" variable refers
6120 + * to whether or not a command can be submitted, not whether or
6121 + * not a previously-submitted command is still executing. In
6122 + * other words, once proof is seen that the previously-submitted
6123 + * command is executing, "vdq" is no longer "busy".
6124 + */
6125 + atomic_t busy;
6126 + uint32_t valid_bit; /* 0x00 or 0x80 */
6127 + /* We need to determine when vdq is no longer busy. This depends
6128 + * on whether the "busy" (last-submitted) dequeue command is
6129 + * targeting DQRR or main-memory, and detection is based on the
6130 + * presence of the dequeue command's "token" showing up in
6131 + * dequeue entries in DQRR or main-memory (respectively). */
6132 + struct dpaa2_dq *storage; /* NULL if DQRR */
6133 + } vdq;
6134 + /* DQRR */
6135 + struct {
6136 + uint32_t next_idx;
6137 + uint32_t valid_bit;
6138 + uint8_t dqrr_size;
6139 +#ifdef WORKAROUND_DQRR_RESET_BUG
6140 + int reset_bug;
6141 +#endif
6142 + } dqrr;
6143 +};
6144 +
6145 +/* -------------------------- */
6146 +/* portal management commands */
6147 +/* -------------------------- */
6148 +
6149 +/* Different management commands all use this common base layer of code to issue
6150 + * commands and poll for results. The first function returns a pointer to where
6151 + * the caller should fill in their MC command (though they should ignore the
6152 + * verb byte), the second function merges in the caller-supplied command
6153 + * verb (which should not include the valid-bit) and submits the command to
6154 + * hardware, and the third function checks for a completed response (returns
6155 + * non-NULL if and only if the response is complete). */
6156 +void *qbman_swp_mc_start(struct qbman_swp *p);
6157 +void qbman_swp_mc_submit(struct qbman_swp *p, void *cmd, uint32_t cmd_verb);
6158 +void *qbman_swp_mc_result(struct qbman_swp *p);
6159 +
6160 +/* Wraps up submit + poll-for-result */
6161 +static inline void *qbman_swp_mc_complete(struct qbman_swp *swp, void *cmd,
6162 + uint32_t cmd_verb)
6163 +{
6164 + int loopvar;
6165 +
6166 + qbman_swp_mc_submit(swp, cmd, cmd_verb);
6167 + DBG_POLL_START(loopvar);
6168 + do {
6169 + DBG_POLL_CHECK(loopvar);
6170 + cmd = qbman_swp_mc_result(swp);
6171 + } while (!cmd);
6172 + return cmd;
6173 +}
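+
+/* Illustrative sketch (editorial addition, not part of the original patch):
+ * callers that cannot spin inside qbman_swp_mc_complete() can drive the same
+ * start/submit/result protocol step by step and poll at their own pace.
+ */
+static inline void *example_mc_command(struct qbman_swp *swp, uint32_t verb)
+{
+ void *cmd = qbman_swp_mc_start(swp);
+ void *rslt;
+
+ if (!cmd)
+ return NULL; /* a management command is already in flight */
+ /* ... encode command attributes into 'cmd' here ... */
+ qbman_swp_mc_submit(swp, cmd, verb);
+ do {
+ rslt = qbman_swp_mc_result(swp); /* NULL until complete */
+ } while (!rslt);
+ return rslt;
+}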
6174 +
6175 +/* ------------ */
6176 +/* qb_attr_code */
6177 +/* ------------ */
6178 +
6179 +/* This struct locates a sub-field within a QBMan portal (CENA) cacheline which
6180 + * is either serving as a configuration command or a query result. The
6181 + * representation is inherently little-endian, as the indexing of the words is
6182 + * itself little-endian in nature and layerscape is little endian for anything
6183 + * that crosses a word boundary too (64-bit fields are the obvious examples).
6184 + */
6185 +struct qb_attr_code {
6186 + unsigned int word; /* which uint32_t[] array member encodes the field */
6187 + unsigned int lsoffset; /* encoding offset from ls-bit */
6188 + unsigned int width; /* encoding width. (bool must be 1.) */
6189 +};
6190 +
6191 +/* Some pre-defined codes */
6192 +extern struct qb_attr_code code_generic_verb;
6193 +extern struct qb_attr_code code_generic_rslt;
6194 +
6195 +/* Macros to define codes */
6196 +#define QB_CODE(a, b, c) { a, b, c}
6197 +#define QB_CODE_NULL \
6198 + QB_CODE((unsigned int)-1, (unsigned int)-1, (unsigned int)-1)
6199 +
6200 +/* Rotate a code "ms", meaning that it moves from less-significant bytes to
6201 + * more-significant, from less-significant words to more-significant, etc. The
6202 + * "ls" version does the inverse, from more-significant towards
6203 + * less-significant.
6204 + */
6205 +static inline void qb_attr_code_rotate_ms(struct qb_attr_code *code,
6206 + unsigned int bits)
6207 +{
6208 + code->lsoffset += bits;
6209 + while (code->lsoffset > 31) {
6210 + code->word++;
6211 + code->lsoffset -= 32;
6212 + }
6213 +}
6214 +static inline void qb_attr_code_rotate_ls(struct qb_attr_code *code,
6215 + unsigned int bits)
6216 +{
6217 + /* Don't be fooled, this trick should work because the types are
6218 + * unsigned. So the case that interests the while loop (the rotate has
6219 + * gone too far and the word count needs to compensate for it), is
6220 + * manifested when lsoffset is negative. But that equates to a really
6221 + * large unsigned value, starting with lots of "F"s. As such, we can
6222 + * continue adding 32 back to it until it wraps back round above zero,
6223 + * to a value of 31 or less...
6224 + */
6225 + code->lsoffset -= bits;
6226 + while (code->lsoffset > 31) {
6227 + code->word--;
6228 + code->lsoffset += 32;
6229 + }
6230 +}
6231 +/* Implement a loop of code rotations until 'expr' evaluates to FALSE (0). */
6232 +#define qb_attr_code_for_ms(code, bits, expr) \
6233 + for (; expr; qb_attr_code_rotate_ms(code, bits))
6234 +#define qb_attr_code_for_ls(code, bits, expr) \
6235 + for (; expr; qb_attr_code_rotate_ls(code, bits))
6236 +
6237 +/* decode a field from a cacheline */
6238 +static inline uint32_t qb_attr_code_decode(const struct qb_attr_code *code,
6239 + const uint32_t *cacheline)
6240 +{
6241 + return d32_uint32_t(code->lsoffset, code->width, cacheline[code->word]);
6242 +}
6243 +static inline uint64_t qb_attr_code_decode_64(const struct qb_attr_code *code,
6244 + const uint64_t *cacheline)
6245 +{
6246 + uint64_t res;
6247 + u64_from_le32_copy(&res, &cacheline[code->word/2], 1);
6248 + return res;
6249 +}
6250 +
6251 +/* encode a field to a cacheline */
6252 +static inline void qb_attr_code_encode(const struct qb_attr_code *code,
6253 + uint32_t *cacheline, uint32_t val)
6254 +{
6255 + cacheline[code->word] =
6256 + r32_uint32_t(code->lsoffset, code->width, cacheline[code->word])
6257 + | e32_uint32_t(code->lsoffset, code->width, val);
6258 +}
6259 +static inline void qb_attr_code_encode_64(const struct qb_attr_code *code,
6260 + uint64_t *cacheline, uint64_t val)
6261 +{
6262 + u64_to_le32_copy(&cacheline[code->word/2], &val, 1);
6263 +}
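+
+/* Illustrative sketch (editorial addition, not part of the original patch):
+ * QB_CODE(1, 16, 8) names an 8-bit field at bits 16..23 of word 1 of a
+ * command/result cacheline; encode and decode round-trip through it.
+ */
+static inline uint32_t example_attr_code_roundtrip(uint32_t *cl)
+{
+ struct qb_attr_code code_demo = QB_CODE(1, 16, 8);
+
+ qb_attr_code_encode(&code_demo, cl, 0xab); /* set the field */
+ return qb_attr_code_decode(&code_demo, cl); /* returns 0xab */
+}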
6264 +
6265 +/* Small-width signed values (two's-complement) will decode into medium-width
6266 + * positives. (Eg. for an 8-bit signed field, which stores values from -128 to
6267 + * +127, a setting of -7 would appear to decode to the 32-bit unsigned value
6268 + * 249. Likewise -120 would decode as 136.) This function allows the caller to
6269 + * "re-sign" such fields to 32-bit signed. (Eg. -7, which was 249 with an 8-bit
6270 + * encoding, will become 0xfffffff9 if you cast the return value to uint32_t).
6271 + */
6272 +static inline int32_t qb_attr_code_makesigned(const struct qb_attr_code *code,
6273 + uint32_t val)
6274 +{
6275 + BUG_ON(val >= (1 << code->width));
6276 + /* If the high bit was set, it was encoding a negative */
6277 + if (val >= (1 << (code->width - 1)))
6278 + return (int32_t)0 - (int32_t)(((uint32_t)1 << code->width) -
6279 + val);
6280 + /* Otherwise, it was encoding a positive */
6281 + return (int32_t)val;
6282 +}
6283 +
6284 +/* ---------------------- */
6285 +/* Descriptors/cachelines */
6286 +/* ---------------------- */
6287 +
6288 +/* To avoid needless dynamic allocation, the driver API often gives the caller
6289 + * a "descriptor" type that the caller can instantiate however they like.
6290 + * Ultimately though, it is just a cacheline of binary storage (or something
6291 + * smaller when it is known that the descriptor doesn't need all 64 bytes) for
6292 + * holding pre-formatted pieces of hardware commands. The performance-critical
6293 + * code can then copy these descriptors directly into hardware command
6294 + * registers more efficiently than trying to construct/format commands
6295 + * on-the-fly. The API user sees the descriptor as an array of 32-bit words in
6296 + * order for the compiler to know its size, but the internal details are not
6297 + * exposed. The following macro is used within the driver for converting *any*
6298 + * descriptor pointer to a usable array pointer. The use of a macro (instead of
6299 + * an inline) is necessary to work with different descriptor types and to work
6300 + * correctly with const and non-const inputs (and similarly-qualified outputs).
6301 + */
6302 +#define qb_cl(d) (&(d)->dont_manipulate_directly[0])
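+
+/* Illustrative sketch (editorial addition, not part of the original patch):
+ * any driver descriptor/result laid out over a "dont_manipulate_directly"
+ * word array can be viewed as raw words via qb_cl(), e.g. a dequeue result:
+ */
+static inline const uint32_t *example_dq_words(const struct dpaa2_dq *dq)
+{
+ return qb_cl(dq); /* pointer to the result's raw 32-bit words */
+}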
6303 --- /dev/null
6304 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_private.h
6305 @@ -0,0 +1,173 @@
6306 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
6307 + *
6308 + * Redistribution and use in source and binary forms, with or without
6309 + * modification, are permitted provided that the following conditions are met:
6310 + * * Redistributions of source code must retain the above copyright
6311 + * notice, this list of conditions and the following disclaimer.
6312 + * * Redistributions in binary form must reproduce the above copyright
6313 + * notice, this list of conditions and the following disclaimer in the
6314 + * documentation and/or other materials provided with the distribution.
6315 + * * Neither the name of Freescale Semiconductor nor the
6316 + * names of its contributors may be used to endorse or promote products
6317 + * derived from this software without specific prior written permission.
6318 + *
6319 + *
6320 + * ALTERNATIVELY, this software may be distributed under the terms of the
6321 + * GNU General Public License ("GPL") as published by the Free Software
6322 + * Foundation, either version 2 of that License or (at your option) any
6323 + * later version.
6324 + *
6325 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
6326 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
6327 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
6328 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
6329 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
6330 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
6331 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
6332 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6333 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
6334 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6335 +*/
6336 +
6337 +/* Perform extra checking */
6338 +#define QBMAN_CHECKING
6339 +
6340 +/* To maximise the amount of logic that is common between the Linux driver and
6341 + * other targets (such as the embedded MC firmware), we pivot here between the
6342 + * inclusion of two platform-specific headers.
6343 + *
6344 + * The first, qbman_sys_decl.h, includes any and all required system headers as
6345 + * well as providing any definitions for the purposes of compatibility. The
6346 + * second, qbman_sys.h, is where platform-specific routines go.
6347 + *
6348 + * The point of the split is that the platform-independent code (including this
6349 + * header) may depend on platform-specific declarations, yet other
6350 + * platform-specific routines may depend on platform-independent definitions.
6351 + */
6352 +
6353 +#include "qbman_sys_decl.h"
6354 +
6355 +#define QMAN_REV_4000 0x04000000
6356 +#define QMAN_REV_4100 0x04010000
6357 +#define QMAN_REV_4101 0x04010001
6358 +
6359 +/* When things go wrong, it is a convenient trick to insert a few FOO()
6360 + * statements in the code to trace progress. TODO: remove this once we are
6361 + * hacking the code less actively.
6362 + */
6363 +#define FOO() fsl_os_print("FOO: %s:%d\n", __FILE__, __LINE__)
6364 +
6365 +/* Any time there is a register interface which we poll on, this provides a
6366 + * "break after x iterations" scheme for it. It's handy for debugging, eg.
6367 + * where you don't want millions of lines of log output from a polling loop
6368 + * that won't terminate, because such things tend to drown out the earlier log
6369 + * output that might explain what caused the problem. (NB: put ";" after each macro!)
6370 + * TODO: we should probably remove this once we're done sanitising the
6371 + * simulator...
6372 + */
6373 +#define DBG_POLL_START(loopvar) (loopvar = 10)
6374 +#define DBG_POLL_CHECK(loopvar) \
6375 + do {if (!(loopvar--)) BUG_ON(1); } while (0)
6376 +
6377 +/* For CCSR or portal-CINH registers that contain fields at arbitrary offsets
6378 + * and widths, these macro-generated encode/decode/isolate/remove inlines can
6379 + * be used.
6380 + *
6381 + * Eg. to "d"ecode a 14-bit field out of a register (into a "uint16_t" type),
6382 + * where the field is located 3 bits "up" from the least-significant bit of the
6383 + * register (ie. the field location within the 32-bit register corresponds to a
6384 + * mask of 0x0001fff8), you would do;
6385 + * uint16_t field = d32_uint16_t(3, 14, reg_value);
6386 + *
6387 + * Or to "e"ncode a 1-bit boolean value (input type is "int", zero is FALSE,
6388 + * non-zero is TRUE, so must convert all non-zero inputs to 1, hence the "!!"
6389 + * operator) into a register at bit location 0x00080000 (19 bits "in" from the
6390 + * LS bit), do;
6391 + * reg_value |= e32_int(19, 1, !!field);
6392 + *
6393 + * If you wish to read-modify-write a register, such that you leave the 14-bit
6394 + * field as-is but have all other fields set to zero, then "i"solate the 14-bit
6395 + * value using;
6396 + * reg_value = i32_uint16_t(3, 14, reg_value);
6397 + *
6398 + * Alternatively, you could "r"emove the 1-bit boolean field (setting it to
6399 + * zero) but leaving all other fields as-is;
6400 + * reg_val = r32_int(19, 1, reg_value);
6401 + *
6402 + */
6403 +#define MAKE_MASK32(width) ((width) == 32 ? 0xffffffff : \
6404 + (uint32_t)(((uint32_t)1 << (width)) - 1))
6405 +#define DECLARE_CODEC32(t) \
6406 +static inline uint32_t e32_##t(uint32_t lsoffset, uint32_t width, t val) \
6407 +{ \
6408 + BUG_ON(width > (sizeof(t) * 8)); \
6409 + return ((uint32_t)val & MAKE_MASK32(width)) << lsoffset; \
6410 +} \
6411 +static inline t d32_##t(uint32_t lsoffset, uint32_t width, uint32_t val) \
6412 +{ \
6413 + BUG_ON(width > (sizeof(t) * 8)); \
6414 + return (t)((val >> lsoffset) & MAKE_MASK32(width)); \
6415 +} \
6416 +static inline uint32_t i32_##t(uint32_t lsoffset, uint32_t width, \
6417 + uint32_t val) \
6418 +{ \
6419 + BUG_ON(width > (sizeof(t) * 8)); \
6420 + return e32_##t(lsoffset, width, d32_##t(lsoffset, width, val)); \
6421 +} \
6422 +static inline uint32_t r32_##t(uint32_t lsoffset, uint32_t width, \
6423 + uint32_t val) \
6424 +{ \
6425 + BUG_ON(width > (sizeof(t) * 8)); \
6426 + return ~(MAKE_MASK32(width) << lsoffset) & val; \
6427 +}
6428 +DECLARE_CODEC32(uint32_t)
6429 +DECLARE_CODEC32(uint16_t)
6430 +DECLARE_CODEC32(uint8_t)
6431 +DECLARE_CODEC32(int)
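+
+/* Illustrative sketch (editorial addition, not part of the original patch):
+ * the worked example from the comment above, as code - read-modify-write a
+ * 14-bit field located 3 bits up from the LS bit of a register value.
+ */
+static inline uint32_t example_rmw_field(uint32_t reg_value, uint16_t field)
+{
+ /* clear the old field, then encode the new value into it */
+ return r32_uint16_t(3, 14, reg_value) | e32_uint16_t(3, 14, field);
+}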
6432 +
6433 + /*********************/
6434 + /* Debugging assists */
6435 + /*********************/
6436 +
6437 +static inline void __hexdump(unsigned long start, unsigned long end,
6438 + unsigned long p, size_t sz, const unsigned char *c)
6439 +{
6440 + while (start < end) {
6441 + unsigned int pos = 0;
6442 + char buf[64];
6443 + int nl = 0;
6444 +
6445 + pos += sprintf(buf + pos, "%08lx: ", start);
6446 + do {
6447 + if ((start < p) || (start >= (p + sz)))
6448 + pos += sprintf(buf + pos, "..");
6449 + else
6450 + pos += sprintf(buf + pos, "%02x", *(c++));
6451 + if (!(++start & 15)) {
6452 + buf[pos++] = '\n';
6453 + nl = 1;
6454 + } else {
6455 + nl = 0;
6456 + if (!(start & 1))
6457 + buf[pos++] = ' ';
6458 + if (!(start & 3))
6459 + buf[pos++] = ' ';
6460 + }
6461 + } while (start & 15);
6462 + if (!nl)
6463 + buf[pos++] = '\n';
6464 + buf[pos] = '\0';
6465 + pr_info("%s", buf);
6466 + }
6467 +}
6468 +static inline void hexdump(const void *ptr, size_t sz)
6469 +{
6470 + unsigned long p = (unsigned long)ptr;
6471 + unsigned long start = p & ~(unsigned long)15;
6472 + unsigned long end = (p + sz + 15) & ~(unsigned long)15;
6473 + const unsigned char *c = ptr;
6474 +
6475 + __hexdump(start, end, p, sz, c);
6476 +}
6477 +
6478 +#include "qbman_sys.h"
6479 --- /dev/null
6480 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_sys.h
6481 @@ -0,0 +1,307 @@
6482 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
6483 + *
6484 + * Redistribution and use in source and binary forms, with or without
6485 + * modification, are permitted provided that the following conditions are met:
6486 + * * Redistributions of source code must retain the above copyright
6487 + * notice, this list of conditions and the following disclaimer.
6488 + * * Redistributions in binary form must reproduce the above copyright
6489 + * notice, this list of conditions and the following disclaimer in the
6490 + * documentation and/or other materials provided with the distribution.
6491 + * * Neither the name of Freescale Semiconductor nor the
6492 + * names of its contributors may be used to endorse or promote products
6493 + * derived from this software without specific prior written permission.
6494 + *
6495 + *
6496 + * ALTERNATIVELY, this software may be distributed under the terms of the
6497 + * GNU General Public License ("GPL") as published by the Free Software
6498 + * Foundation, either version 2 of that License or (at your option) any
6499 + * later version.
6500 + *
6501 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
6502 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
6503 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
6504 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
6505 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
6506 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
6507 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
6508 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6509 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
6510 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6511 + */
6512 +/* qbman_sys_decl.h and qbman_sys.h are the two platform-specific files in the
6513 + * driver. They are only included via qbman_private.h, which is itself a
6514 + * platform-independent file and is included by all the other driver source.
6515 + *
6516 + * qbman_sys_decl.h is included prior to all other declarations and logic, and
6517 + * it exists to provide compatibility with any linux interfaces our
6518 + * single-source driver code is dependent on (eg. kmalloc). Ie. this file
6519 + * provides linux compatibility.
6520 + *
6521 + * This qbman_sys.h header, on the other hand, is included *after* any common
6522 + * and platform-neutral declarations and logic in qbman_private.h, and exists to
6523 + * implement any platform-specific logic of the qbman driver itself. Ie. it is
6524 + * *not* to provide linux compatibility.
6525 + */
6526 +
6527 +/* Trace the 3 different classes of read/write access to QBMan. #undef as
6528 + * required. */
6529 +#undef QBMAN_CCSR_TRACE
6530 +#undef QBMAN_CINH_TRACE
6531 +#undef QBMAN_CENA_TRACE
6532 +
6533 +static inline void word_copy(void *d, const void *s, unsigned int cnt)
6534 +{
6535 + uint32_t *dd = d;
6536 + const uint32_t *ss = s;
6537 +
6538 + while (cnt--)
6539 + *(dd++) = *(ss++);
6540 +}
6541 +
6542 +/* Currently, the CENA support code expects each 32-bit word to be written in
6543 + * host order, and these are converted to hardware (little-endian) order on
6544 + * command submission. However, 64-bit quantities must be written (and read)
6545 + * as two 32-bit words with the least-significant word first, irrespective of
6546 + * host endianness. */
6547 +static inline void u64_to_le32_copy(void *d, const uint64_t *s,
6548 + unsigned int cnt)
6549 +{
6550 + uint32_t *dd = d;
6551 + const uint32_t *ss = (const uint32_t *)s;
6552 +
6553 + while (cnt--) {
6554 + /* TBD: the toolchain was choking on the use of 64-bit types up
6555 + * until recently so this works entirely with 32-bit variables.
6556 + * When 64-bit types become usable again, investigate better
6557 + * ways of doing this. */
6558 +#if defined(__BIG_ENDIAN)
6559 + *(dd++) = ss[1];
6560 + *(dd++) = ss[0];
6561 + ss += 2;
6562 +#else
6563 + *(dd++) = *(ss++);
6564 + *(dd++) = *(ss++);
6565 +#endif
6566 + }
6567 +}
6568 +static inline void u64_from_le32_copy(uint64_t *d, const void *s,
6569 + unsigned int cnt)
6570 +{
6571 + const uint32_t *ss = s;
6572 + uint32_t *dd = (uint32_t *)d;
6573 +
6574 + while (cnt--) {
6575 +#if defined(__BIG_ENDIAN)
6576 + dd[1] = *(ss++);
6577 + dd[0] = *(ss++);
6578 + dd += 2;
6579 +#else
6580 + *(dd++) = *(ss++);
6581 + *(dd++) = *(ss++);
6582 +#endif
6583 + }
6584 +}
6585 +
6586 +/* Convert a host-native 32bit value into little endian */
6587 +#if defined(__BIG_ENDIAN)
6588 +static inline uint32_t make_le32(uint32_t val)
6589 +{
6590 + return ((val & 0xff) << 24) | ((val & 0xff00) << 8) |
6591 + ((val & 0xff0000) >> 8) | ((val & 0xff000000) >> 24);
6592 +}
6593 +static inline uint32_t make_le24(uint32_t val)
6594 +{
6595 + return (((val & 0xff) << 16) | (val & 0xff00) |
6596 + ((val & 0xff0000) >> 16));
6597 +}
6598 +#else
6599 +#define make_le32(val) (val)
6600 +#define make_le24(val) (val)
6601 +#endif
6602 +static inline void make_le32_n(uint32_t *val, unsigned int num)
6603 +{
6604 + while (num--) {
6605 + *val = make_le32(*val);
6606 + val++;
6607 + }
6608 +}
6609 +
6610 + /******************/
6611 + /* Portal access */
6612 + /******************/
6613 +struct qbman_swp_sys {
6614 + /* On GPP, the sys support for qbman_swp is here. The CENA region is
6615 + * not an mmap() of the real portal registers, but an allocated
6616 + * place-holder, because the actual writes/reads to/from the portal are
6617 + * marshalled from these allocated areas using QBMan's "MC access
6618 + * registers". CINH accesses are atomic so there's no need for a
6619 + * place-holder. */
6620 + void *cena;
6621 + void __iomem *addr_cena;
6622 + void __iomem *addr_cinh;
6623 +};
6624 +
6625 +/* P_OFFSET is (ACCESS_CMD,0,12) - offset within the portal
6626 + * C is (ACCESS_CMD,12,1) - is inhibited? (0==CENA, 1==CINH)
6627 + * SWP_IDX is (ACCESS_CMD,16,10) - Software portal index
6628 + * P is (ACCESS_CMD,28,1) - (0==special portal, 1==any portal)
6629 + * T is (ACCESS_CMD,29,1) - Command type (0==READ, 1==WRITE)
6630 + * E is (ACCESS_CMD,31,1) - Command execute (1 to issue, poll for 0==complete)
6631 + */
6632 +
6633 +static inline void qbman_cinh_write(struct qbman_swp_sys *s, uint32_t offset,
6634 + uint32_t val)
6635 +{
6637 + writel_relaxed(val, s->addr_cinh + offset);
6638 +#ifdef QBMAN_CINH_TRACE
6639 + pr_info("qbman_cinh_write(%p:0x%03x) 0x%08x\n",
6640 + s->addr_cinh, offset, val);
6641 +#endif
6642 +}
6643 +
6644 +static inline uint32_t qbman_cinh_read(struct qbman_swp_sys *s, uint32_t offset)
6645 +{
6646 + uint32_t reg = readl_relaxed(s->addr_cinh + offset);
6647 +
6648 +#ifdef QBMAN_CINH_TRACE
6649 + pr_info("qbman_cinh_read(%p:0x%03x) 0x%08x\n",
6650 + s->addr_cinh, offset, reg);
6651 +#endif
6652 + return reg;
6653 +}
6654 +
6655 +static inline void *qbman_cena_write_start(struct qbman_swp_sys *s,
6656 + uint32_t offset)
6657 +{
6658 + void *shadow = s->cena + offset;
6659 +
6660 +#ifdef QBMAN_CENA_TRACE
6661 + pr_info("qbman_cena_write_start(%p:0x%03x) %p\n",
6662 + s->addr_cena, offset, shadow);
6663 +#endif
6664 + BUG_ON(offset & 63);
6665 + dcbz(shadow);
6666 + return shadow;
6667 +}
6668 +
6669 +static inline void qbman_cena_write_complete(struct qbman_swp_sys *s,
6670 + uint32_t offset, void *cmd)
6671 +{
6672 + const uint32_t *shadow = cmd;
6673 + int loop;
6674 +
6675 +#ifdef QBMAN_CENA_TRACE
6676 + pr_info("qbman_cena_write_complete(%p:0x%03x) %p\n",
6677 + s->addr_cena, offset, shadow);
6678 + hexdump(cmd, 64);
6679 +#endif
6680 + for (loop = 15; loop >= 1; loop--)
6681 + writel_relaxed(shadow[loop], s->addr_cena +
6682 + offset + loop * 4);
6683 + lwsync();
6684 + writel_relaxed(shadow[0], s->addr_cena + offset);
6685 + dcbf(s->addr_cena + offset);
6686 +}
6687 +
6688 +static inline void *qbman_cena_read(struct qbman_swp_sys *s, uint32_t offset)
6689 +{
6690 + uint32_t *shadow = s->cena + offset;
6691 + unsigned int loop;
6692 +
6693 +#ifdef QBMAN_CENA_TRACE
6694 + pr_info("qbman_cena_read(%p:0x%03x) %p\n",
6695 + s->addr_cena, offset, shadow);
6696 +#endif
6697 +
6698 + for (loop = 0; loop < 16; loop++)
6699 + shadow[loop] = readl_relaxed(s->addr_cena + offset
6700 + + loop * 4);
6701 +#ifdef QBMAN_CENA_TRACE
6702 + hexdump(shadow, 64);
6703 +#endif
6704 + return shadow;
6705 +}
6706 +
6707 +static inline void qbman_cena_invalidate_prefetch(struct qbman_swp_sys *s,
6708 + uint32_t offset)
6709 +{
6710 + dcivac(s->addr_cena + offset);
6711 + prefetch_for_load(s->addr_cena + offset);
6712 +}
6713 +
6714 + /******************/
6715 + /* Portal support */
6716 + /******************/
6717 +
6718 +/* The SWP_CFG portal register is special, in that it is used by the
6719 + * platform-specific code rather than the platform-independent code in
6720 + * qbman_portal.c. So use of it is declared locally here. */
6721 +#define QBMAN_CINH_SWP_CFG 0xd00
6722 +
6723 +/* For MC portal use, we always configure with
6724 + * DQRR_MF is (SWP_CFG,20,3) - DQRR max fill (<- 0x4)
6725 + * EST is (SWP_CFG,16,3) - EQCR_CI stashing threshold (<- 0x0)
6726 + * RPM is (SWP_CFG,12,2) - RCR production notification mode (<- 0x3)
6727 + * DCM is (SWP_CFG,10,2) - DQRR consumption notification mode (<- 0x2)
6728 + * EPM is (SWP_CFG,8,2) - EQCR production notification mode (<- 0x3)
6729 + * SD is (SWP_CFG,5,1) - memory stashing drop enable (<- FALSE)
6730 + * SP is (SWP_CFG,4,1) - memory stashing priority (<- TRUE)
6731 + * SE is (SWP_CFG,3,1) - memory stashing enable (<- 0x0)
6732 + * DP is (SWP_CFG,2,1) - dequeue stashing priority (<- TRUE)
6733 + * DE is (SWP_CFG,1,1) - dequeue stashing enable (<- 0x0)
6734 + * EP is (SWP_CFG,0,1) - EQCR_CI stashing priority (<- FALSE)
6735 + */
6736 +static inline uint32_t qbman_set_swp_cfg(uint8_t max_fill, uint8_t wn,
6737 + uint8_t est, uint8_t rpm, uint8_t dcm,
6738 + uint8_t epm, int sd, int sp, int se,
6739 + int dp, int de, int ep)
6740 +{
6741 + uint32_t reg;
6742 +
6743 + reg = e32_uint8_t(20, (uint32_t)(3 + (max_fill >> 3)), max_fill) |
6744 + e32_uint8_t(16, 3, est) | e32_uint8_t(12, 2, rpm) |
6745 + e32_uint8_t(10, 2, dcm) | e32_uint8_t(8, 2, epm) |
6746 + e32_int(5, 1, sd) | e32_int(4, 1, sp) | e32_int(3, 1, se) |
6747 + e32_int(2, 1, dp) | e32_int(1, 1, de) | e32_int(0, 1, ep) |
6748 + e32_uint8_t(14, 1, wn);
6749 + return reg;
6750 +}
6751 +
6752 +static inline int qbman_swp_sys_init(struct qbman_swp_sys *s,
6753 + const struct qbman_swp_desc *d,
6754 + uint8_t dqrr_size)
6755 +{
6756 + uint32_t reg;
6757 +
6758 + s->addr_cena = d->cena_bar;
6759 + s->addr_cinh = d->cinh_bar;
6760 + s->cena = (void *)get_zeroed_page(GFP_KERNEL);
6761 + if (!s->cena) {
6762 + pr_err("Could not allocate page for cena shadow\n");
6763 + return -1;
6764 + }
6765 +
6766 +#ifdef QBMAN_CHECKING
6767 + /* We should never be asked to initialise for a portal that isn't in
6768 + * the power-on state. (Ie. don't forget to reset portals when they are
6769 + * decommissioned!)
6770 + */
6771 + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
6772 + BUG_ON(reg);
6773 +#endif
6774 + reg = qbman_set_swp_cfg(dqrr_size, 0, 0, 3, 2, 3, 0, 1, 0, 1, 0, 0);
6775 + qbman_cinh_write(s, QBMAN_CINH_SWP_CFG, reg);
6776 + reg = qbman_cinh_read(s, QBMAN_CINH_SWP_CFG);
6777 + if (!reg) {
6778 + pr_err("The portal is not enabled!\n");
6779 + free_page((unsigned long)s->cena);
6780 + return -1;
6781 + }
6782 + return 0;
6783 +}
6784 +
6785 +static inline void qbman_swp_sys_finish(struct qbman_swp_sys *s)
6786 +{
6787 + free_page((unsigned long)s->cena);
6788 +}
6789 --- /dev/null
6790 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_sys_decl.h
6791 @@ -0,0 +1,86 @@
6792 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
6793 + *
6794 + * Redistribution and use in source and binary forms, with or without
6795 + * modification, are permitted provided that the following conditions are met:
6796 + * * Redistributions of source code must retain the above copyright
6797 + * notice, this list of conditions and the following disclaimer.
6798 + * * Redistributions in binary form must reproduce the above copyright
6799 + * notice, this list of conditions and the following disclaimer in the
6800 + * documentation and/or other materials provided with the distribution.
6801 + * * Neither the name of Freescale Semiconductor nor the
6802 + * names of its contributors may be used to endorse or promote products
6803 + * derived from this software without specific prior written permission.
6804 + *
6805 + *
6806 + * ALTERNATIVELY, this software may be distributed under the terms of the
6807 + * GNU General Public License ("GPL") as published by the Free Software
6808 + * Foundation, either version 2 of that License or (at your option) any
6809 + * later version.
6810 + *
6811 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
6812 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
6813 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
6814 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
6815 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
6816 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
6817 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
6818 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6819 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
6820 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6821 + */
6822 +#include <linux/kernel.h>
6823 +#include <linux/errno.h>
6824 +#include <linux/io.h>
6825 +#include <linux/dma-mapping.h>
6826 +#include <linux/bootmem.h>
6827 +#include <linux/slab.h>
6828 +#include <linux/module.h>
6829 +#include <linux/init.h>
6830 +#include <linux/interrupt.h>
6831 +#include <linux/delay.h>
6832 +#include <linux/memblock.h>
6833 +#include <linux/completion.h>
6834 +#include <linux/log2.h>
6835 +#include <linux/types.h>
6836 +#include <linux/ioctl.h>
6837 +#include <linux/device.h>
6838 +#include <linux/smp.h>
6839 +#include <linux/vmalloc.h>
6840 +#include "fsl_qbman_base.h"
6841 +
6842 +/* The platform-independent code shouldn't need endianness, except for
6843 + * weird/fast-path cases like qbman_result_has_token(), which needs to
6844 + * perform a passive and endianness-specific test on a read-only data structure
6845 + * very quickly. It's an exception, and this symbol is used for that case. */
6846 +#if defined(__BIG_ENDIAN)
6847 +#define DQRR_TOK_OFFSET 0
6848 +#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 24
6849 +#define SCN_STATE_OFFSET_IN_MEM 8
6850 +#define SCN_RID_OFFSET_IN_MEM 8
6851 +#else
6852 +#define DQRR_TOK_OFFSET 24
6853 +#define QBMAN_RESULT_VERB_OFFSET_IN_MEM 0
6854 +#define SCN_STATE_OFFSET_IN_MEM 16
6855 +#define SCN_RID_OFFSET_IN_MEM 0
6856 +#endif
6857 +
6858 +/* Similarly-named functions */
6859 +#define upper32(a) upper_32_bits(a)
6860 +#define lower32(a) lower_32_bits(a)
6861 +
6862 + /****************/
6863 + /* arch assists */
6864 + /****************/
6865 +
6866 +#define dcbz(p) { asm volatile("dc zva, %0" : : "r" (p) : "memory"); }
6867 +#define lwsync() { asm volatile("dmb st" : : : "memory"); }
6868 +#define dcbf(p) { asm volatile("dc cvac, %0;" : : "r" (p) : "memory"); }
6869 +#define dcivac(p) { asm volatile("dc ivac, %0" : : "r"(p) : "memory"); }
6870 +static inline void prefetch_for_load(void *p)
6871 +{
6872 + asm volatile("prfm pldl1keep, [%0, #64]" : : "r" (p));
6873 +}
6874 +static inline void prefetch_for_store(void *p)
6875 +{
6876 + asm volatile("prfm pstl1keep, [%0, #64]" : : "r" (p));
6877 +}
6878 --- /dev/null
6879 +++ b/drivers/staging/fsl-mc/bus/dpio/qbman_test.c
6880 @@ -0,0 +1,664 @@
6881 +/* Copyright (C) 2014 Freescale Semiconductor, Inc.
6882 + *
6883 + * Redistribution and use in source and binary forms, with or without
6884 + * modification, are permitted provided that the following conditions are met:
6885 + * * Redistributions of source code must retain the above copyright
6886 + * notice, this list of conditions and the following disclaimer.
6887 + * * Redistributions in binary form must reproduce the above copyright
6888 + * notice, this list of conditions and the following disclaimer in the
6889 + * documentation and/or other materials provided with the distribution.
6890 + * * Neither the name of Freescale Semiconductor nor the
6891 + * names of its contributors may be used to endorse or promote products
6892 + * derived from this software without specific prior written permission.
6893 + *
6894 + *
6895 + * ALTERNATIVELY, this software may be distributed under the terms of the
6896 + * GNU General Public License ("GPL") as published by the Free Software
6897 + * Foundation, either version 2 of that License or (at your option) any
6898 + * later version.
6899 + *
6900 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
6901 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
6902 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
6903 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
6904 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
6905 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
6906 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
6907 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
6908 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
6909 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
6910 + */
6911 +
6912 +#include <linux/kernel.h>
6913 +#include <linux/io.h>
6914 +#include <linux/module.h>
6915 +
6916 +#include "qbman_private.h"
6917 +#include "fsl_qbman_portal.h"
6918 +#include "qbman_debug.h"
6919 +#include "../../include/fsl_dpaa2_fd.h"
6920 +
6921 +#define QBMAN_SWP_CENA_BASE 0x818000000
6922 +#define QBMAN_SWP_CINH_BASE 0x81c000000
6923 +
6924 +#define QBMAN_PORTAL_IDX 2
6925 +#define QBMAN_TEST_FQID 19
6926 +#define QBMAN_TEST_BPID 23
6927 +#define QBMAN_USE_QD
6928 +#ifdef QBMAN_USE_QD
6929 +#define QBMAN_TEST_QDID 1
6930 +#endif
6931 +#define QBMAN_TEST_LFQID 0xf00010
6932 +
6933 +#define NUM_EQ_FRAME 10
6934 +#define NUM_DQ_FRAME 10
6935 +#define NUM_DQ_IN_DQRR 5
6936 +#define NUM_DQ_IN_MEM (NUM_DQ_FRAME - NUM_DQ_IN_DQRR)
6937 +
6938 +static struct qbman_swp *swp;
6939 +static struct qbman_eq_desc eqdesc;
6940 +static struct qbman_pull_desc pulldesc;
6941 +static struct qbman_release_desc releasedesc;
6942 +static struct qbman_eq_response eq_storage[1];
6943 +static struct dpaa2_dq dq_storage[NUM_DQ_IN_MEM] __aligned(64);
6944 +static dma_addr_t eq_storage_phys;
6945 +static dma_addr_t dq_storage_phys;
6946 +
6947 +/* FQ ctx attribute values for the test code. */
6948 +#define FQCTX_HI 0xabbaf00d
6949 +#define FQCTX_LO 0x98765432
6950 +#define FQ_VFQID 0x123456
6951 +
6952 +/* Sample frame descriptor */
6953 +static struct qbman_fd_simple fd = {
6954 + .addr_lo = 0xbabaf33d,
6955 + .addr_hi = 0x01234567,
6956 + .len = 0x7777,
6957 + .frc = 0xdeadbeef,
6958 + .flc_lo = 0xcafecafe,
6959 + .flc_hi = 0xbeadabba
6960 +};
6961 +
6962 +static void fd_inc(struct qbman_fd_simple *_fd)
6963 +{
6964 + _fd->addr_lo += _fd->len;
6965 + _fd->flc_lo += 0x100;
6966 + _fd->frc += 0x10;
6967 +}
6968 +
6969 +static int fd_cmp(struct qbman_fd *fda, struct qbman_fd *fdb)
6970 +{
6971 + int i;
6972 +
6973 + for (i = 0; i < 8; i++)
6974 + if (fda->words[i] - fdb->words[i])
6975 + return 1;
6976 + return 0;
6977 +}
6978 +
6979 +struct qbman_fd fd_eq[NUM_EQ_FRAME];
6980 +struct qbman_fd fd_dq[NUM_DQ_FRAME];
6981 +
6982 +/* "Buffers" to be released (and storage for buffers to be acquired) */
6983 +static uint64_t rbufs[320];
6984 +static uint64_t abufs[320];
6985 +
6986 +static void do_enqueue(struct qbman_swp *swp)
6987 +{
6988 + int i, j, ret;
6989 +
6990 +#ifdef QBMAN_USE_QD
6991 + pr_info("*****QBMan_test: Enqueue %d frames to QD %d\n",
6992 + NUM_EQ_FRAME, QBMAN_TEST_QDID);
6993 +#else
6994 + pr_info("*****QBMan_test: Enqueue %d frames to FQ %d\n",
6995 + NUM_EQ_FRAME, QBMAN_TEST_FQID);
6996 +#endif
6997 + for (i = 0; i < NUM_EQ_FRAME; i++) {
6998 + /*********************************/
6999 + /* Prepare an enqueue descriptor */
7000 + /*********************************/
7001 + memset(eq_storage, 0, sizeof(eq_storage));
7002 + eq_storage_phys = virt_to_phys(eq_storage);
7003 + qbman_eq_desc_clear(&eqdesc);
7004 + qbman_eq_desc_set_no_orp(&eqdesc, 0);
7005 + qbman_eq_desc_set_response(&eqdesc, eq_storage_phys, 0);
7006 + qbman_eq_desc_set_token(&eqdesc, 0x99);
7007 +#ifdef QBMAN_USE_QD
7008 + /**********************************/
7009 + /* Prepare a Queueing Destination */
7010 + /**********************************/
7011 + qbman_eq_desc_set_qd(&eqdesc, QBMAN_TEST_QDID, 0, 3);
7012 +#else
7013 + qbman_eq_desc_set_fq(&eqdesc, QBMAN_TEST_FQID);
7014 +#endif
7015 +
7016 + /******************/
7017 + /* Try an enqueue */
7018 + /******************/
7019 + ret = qbman_swp_enqueue(swp, &eqdesc,
7020 + (const struct qbman_fd *)&fd);
7021 + BUG_ON(ret);
7022 + for (j = 0; j < 8; j++)
7023 + fd_eq[i].words[j] = *((uint32_t *)&fd + j);
7024 + fd_inc(&fd);
7025 + }
7026 +}
7027 +
7028 +static void do_push_dequeue(struct qbman_swp *swp)
7029 +{
7030 + int i, j;
7031 + const struct dpaa2_dq *dq_storage1;
7032 + const struct qbman_fd *__fd;
7033 + int loopvar;
7034 +
7035 + pr_info("*****QBMan_test: Start push dequeue\n");
7036 + for (i = 0; i < NUM_DQ_FRAME; i++) {
7037 + DBG_POLL_START(loopvar);
7038 + do {
7039 + DBG_POLL_CHECK(loopvar);
7040 + dq_storage1 = qbman_swp_dqrr_next(swp);
7041 + } while (!dq_storage1);
7042 + if (dq_storage1) {
7043 + __fd = (const struct qbman_fd *)
7044 + dpaa2_dq_fd(dq_storage1);
7045 + for (j = 0; j < 8; j++)
7046 + fd_dq[i].words[j] = __fd->words[j];
7047 + if (fd_cmp(&fd_eq[i], &fd_dq[i])) {
7048 + pr_info("enqueue FD is\n");
7049 + hexdump(&fd_eq[i], 32);
7050 + pr_info("dequeue FD is\n");
7051 + hexdump(&fd_dq[i], 32);
7052 + }
7053 + qbman_swp_dqrr_consume(swp, dq_storage1);
7054 + } else {
7055 + pr_info("The push dequeue fails\n");
7056 + }
7057 + }
7058 +}
7059 +
7060 +static void do_pull_dequeue(struct qbman_swp *swp)
7061 +{
7062 + int i, j, ret;
7063 + const struct dpaa2_dq *dq_storage1;
7064 + const struct qbman_fd *__fd;
7065 + int loopvar;
7066 +
7067 + pr_info("*****QBMan_test: Dequeue %d frames with dq entry in DQRR\n",
7068 + NUM_DQ_IN_DQRR);
7069 + for (i = 0; i < NUM_DQ_IN_DQRR; i++) {
7070 + qbman_pull_desc_clear(&pulldesc);
7071 + qbman_pull_desc_set_storage(&pulldesc, NULL, 0, 0);
7072 + qbman_pull_desc_set_numframes(&pulldesc, 1);
7073 + qbman_pull_desc_set_fq(&pulldesc, QBMAN_TEST_FQID);
7074 +
7075 + ret = qbman_swp_pull(swp, &pulldesc);
7076 + BUG_ON(ret);
7077 + DBG_POLL_START(loopvar);
7078 + do {
7079 + DBG_POLL_CHECK(loopvar);
7080 + dq_storage1 = qbman_swp_dqrr_next(swp);
7081 + } while (!dq_storage1);
7082 +
7083 + if (dq_storage1) {
7084 + __fd = (const struct qbman_fd *)
7085 + dpaa2_dq_fd(dq_storage1);
7086 + for (j = 0; j < 8; j++)
7087 + fd_dq[i].words[j] = __fd->words[j];
7088 + if (fd_cmp(&fd_eq[i], &fd_dq[i])) {
7089 + pr_info("enqueue FD is\n");
7090 + hexdump(&fd_eq[i], 32);
7091 + pr_info("dequeue FD is\n");
7092 + hexdump(&fd_dq[i], 32);
7093 + }
7094 + qbman_swp_dqrr_consume(swp, dq_storage1);
7095 + } else {
7096 + pr_info("Dequeue with dq entry in DQRR fails\n");
7097 + }
7098 + }
7099 +
7100 + pr_info("*****QBMan_test: Dequeue %d frames with dq entry in memory\n",
7101 + NUM_DQ_IN_MEM);
7102 + for (i = 0; i < NUM_DQ_IN_MEM; i++) {
7103 + dq_storage_phys = virt_to_phys(&dq_storage[i]);
7104 + qbman_pull_desc_clear(&pulldesc);
7105 + qbman_pull_desc_set_storage(&pulldesc, &dq_storage[i],
7106 + dq_storage_phys, 1);
7107 + qbman_pull_desc_set_numframes(&pulldesc, 1);
7108 + qbman_pull_desc_set_fq(&pulldesc, QBMAN_TEST_FQID);
7109 + ret = qbman_swp_pull(swp, &pulldesc);
7110 + BUG_ON(ret);
7111 +
7112 + DBG_POLL_START(loopvar);
7113 + do {
7114 + DBG_POLL_CHECK(loopvar);
7115 + ret = qbman_result_has_new_result(swp,
7116 + &dq_storage[i]);
7117 + } while (!ret);
7118 +
7119 + if (ret) {
7120 + for (j = 0; j < 8; j++)
7121 + fd_dq[i + NUM_DQ_IN_DQRR].words[j] =
7122 + dq_storage[i].dont_manipulate_directly[j + 8];
7123 + j = i + NUM_DQ_IN_DQRR;
7124 + if (fd_cmp(&fd_eq[j], &fd_dq[j])) {
7125 + pr_info("enqueue FD is\n");
7126 + hexdump(&fd_eq[i + NUM_DQ_IN_DQRR], 32);
7127 + pr_info("dequeue FD is\n");
7128 + hexdump(&fd_dq[i + NUM_DQ_IN_DQRR], 32);
7129 + hexdump(&dq_storage[i], 64);
7130 + }
7131 + } else {
7132 + pr_info("Dequeue with dq entry in memory fails\n");
7133 + }
7134 + }
7135 +}
7136 +
7137 +static void release_buffer(struct qbman_swp *swp, unsigned int num)
7138 +{
7139 + int ret;
7140 + unsigned int i, j;
7141 +
7142 + qbman_release_desc_clear(&releasedesc);
7143 + qbman_release_desc_set_bpid(&releasedesc, QBMAN_TEST_BPID);
7144 + pr_info("*****QBMan_test: Release %d buffers to BP %d\n",
7145 + num, QBMAN_TEST_BPID);
7146 + for (i = 0; i < (num / 7 + 1); i++) {
7147 + j = ((num - i * 7) > 7) ? 7 : (num - i * 7);
7148 + ret = qbman_swp_release(swp, &releasedesc, &rbufs[i * 7], j);
7149 + BUG_ON(ret);
7150 + }
7151 +}
7152 +
7153 +static void acquire_buffer(struct qbman_swp *swp, unsigned int num)
7154 +{
7155 + int ret;
7156 + unsigned int i, j;
7157 +
7158 + pr_info("*****QBMan_test: Acquire %d buffers from BP %d\n",
7159 + num, QBMAN_TEST_BPID);
7160 +
7161 + for (i = 0; i < (num / 7 + 1); i++) {
7162 + j = ((num - i * 7) > 7) ? 7 : (num - i * 7);
7163 + ret = qbman_swp_acquire(swp, QBMAN_TEST_BPID, &abufs[i * 7], j);
7164 + BUG_ON(ret != j);
7165 + }
7166 +}
7167 +
7168 +static void buffer_pool_test(struct qbman_swp *swp)
7169 +{
7170 + struct qbman_attr info;
7171 + struct dpaa2_dq *bpscn_message;
7172 + dma_addr_t bpscn_phys;
7173 + uint64_t bpscn_ctx;
7174 + uint64_t ctx = 0xbbccddaadeadbeefull;
7175 + int i, ret;
7176 + uint32_t hw_targ;
7177 +
7178 + pr_info("*****QBMan_test: test buffer pool management\n");
7179 + ret = qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
7180 + qbman_bp_attr_get_bpscn_addr(&info, &bpscn_phys);
7181 + pr_info("The bpscn is %llx, info_phys is %llx\n", bpscn_phys,
7182 + virt_to_phys(&info));
7183 + bpscn_message = phys_to_virt(bpscn_phys);
7184 +
7185 + for (i = 0; i < 320; i++)
7186 + rbufs[i] = 0xf00dabba01234567ull + i * 0x40;
7187 +
7188 + release_buffer(swp, 320);
7189 +
7190 + pr_info("QBMan_test: query the buffer pool\n");
7191 + qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
7192 + hexdump(&info, 64);
7193 + qbman_bp_attr_get_hw_targ(&info, &hw_targ);
7194 + pr_info("hw_targ is %d\n", hw_targ);
7195 +
7196 + /* Acquire buffers to trigger BPSCN */
7197 + acquire_buffer(swp, 300);
7198 + /* BPSCN should be written to the memory */
7199 + qbman_bp_query(swp, QBMAN_TEST_BPID, &info);
7200 + hexdump(&info, 64);
7201 + hexdump(bpscn_message, 64);
7202 + BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
7203 + /* There should be free buffers in the pool */
7204 + BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
7205 + /* Buffer pool is depleted */
7206 + BUG_ON(!qbman_result_bpscn_is_depleted(bpscn_message));
7207 + /* The ctx should match */
7208 + bpscn_ctx = qbman_result_bpscn_ctx(bpscn_message);
7209 + pr_info("BPSCN test: ctx %llx, bpscn_ctx %llx\n", ctx, bpscn_ctx);
7210 + BUG_ON(ctx != bpscn_ctx);
7211 + memset(bpscn_message, 0, sizeof(struct dpaa2_dq));
7212 +
7213 + /* Re-seed the buffer pool to trigger BPSCN */
7214 + release_buffer(swp, 240);
7215 + /* BPSCN should be written to the memory */
7216 + BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
7217 + /* There should be free buffers in the pool */
7218 + BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
7219 + /* Buffer pool is not depleted */
7220 + BUG_ON(qbman_result_bpscn_is_depleted(bpscn_message));
7221 + memset(bpscn_message, 0, sizeof(struct dpaa2_dq));
7222 +
7223 + acquire_buffer(swp, 260);
7224 + /* BPSCN should be written to the memory */
7225 + BUG_ON(!qbman_result_is_BPSCN(bpscn_message));
7226 + /* There should be free buffers in the pool when the BPSCN is generated */
7227 + BUG_ON(!(qbman_result_bpscn_has_free_bufs(bpscn_message)));
7228 + /* Buffer pool is depleted */
7229 + BUG_ON(!qbman_result_bpscn_is_depleted(bpscn_message));
7230 +}
7231 +
7232 +static void ceetm_test(struct qbman_swp *swp)
7233 +{
7234 + int i, j, ret;
7235 +
7236 + qbman_eq_desc_clear(&eqdesc);
7237 + qbman_eq_desc_set_no_orp(&eqdesc, 0);
7238 + qbman_eq_desc_set_fq(&eqdesc, QBMAN_TEST_LFQID);
7239 + pr_info("*****QBMan_test: Enqueue to LFQID %x\n",
7240 + QBMAN_TEST_LFQID);
7241 + for (i = 0; i < NUM_EQ_FRAME; i++) {
7242 + ret = qbman_swp_enqueue(swp, &eqdesc,
7243 + (const struct qbman_fd *)&fd);
7244 + BUG_ON(ret);
7245 + for (j = 0; j < 8; j++)
7246 + fd_eq[i].words[j] = *((uint32_t *)&fd + j);
7247 + fd_inc(&fd);
7248 + }
7249 +}
7250 +
7251 +int qbman_test(void)
7252 +{
7253 + struct qbman_swp_desc pd;
7254 + uint32_t reg;
7255 +
7256 + pd.cena_bar = ioremap_cache_ns(QBMAN_SWP_CENA_BASE +
7257 + QBMAN_PORTAL_IDX * 0x10000, 0x10000);
7258 + pd.cinh_bar = ioremap(QBMAN_SWP_CINH_BASE +
7259 + QBMAN_PORTAL_IDX * 0x10000, 0x10000);
7260 +
7261 + /* Detect whether the mc image is the test image with GPP setup */
7262 + reg = readl_relaxed(pd.cena_bar + 0x4);
7263 + if (reg != 0xdeadbeef) {
7264 + pr_err("The MC image doesn't have GPP test setup, stop!\n");
7265 + iounmap(pd.cena_bar);
7266 + iounmap(pd.cinh_bar);
7267 + return -1;
7268 + }
7269 +
7270 + pr_info("*****QBMan_test: Init QBMan SWP %d\n", QBMAN_PORTAL_IDX);
7271 + swp = qbman_swp_init(&pd);
7272 + if (!swp) {
7273 + iounmap(pd.cena_bar);
7274 + iounmap(pd.cinh_bar);
7275 + return -1;
7276 + }
7277 +
7278 + /*******************/
7279 + /* Enqueue frames */
7280 + /*******************/
7281 + do_enqueue(swp);
7282 +
7283 + /*******************/
7284 + /* Do pull dequeue */
7285 + /*******************/
7286 + do_pull_dequeue(swp);
7287 +
7288 + /*******************/
7289 + /* Enqueue frames */
7290 + /*******************/
7291 + qbman_swp_push_set(swp, 0, 1);
7292 + qbman_swp_fq_schedule(swp, QBMAN_TEST_FQID);
7293 + do_enqueue(swp);
7294 +
7295 + /*******************/
7296 + /* Do push dequeue */
7297 + /*******************/
7298 + do_push_dequeue(swp);
7299 +
7300 + /**************************/
7301 + /* Test buffer pool funcs */
7302 + /**************************/
7303 + buffer_pool_test(swp);
7304 +
7305 + /******************/
7306 + /* CEETM test */
7307 + /******************/
7308 + ceetm_test(swp);
7309 +
7310 + qbman_swp_finish(swp);
7311 + pr_info("*****QBMan_test: Kernel test Passed\n");
7312 + return 0;
7313 +}
7314 +
7315 +/* user-space test-case, definitions:
7316 + *
7317 + * 1 portal only, using portal index 3.
7318 + */
7319 +
7320 +#include <linux/uaccess.h>
7321 +#include <linux/ioctl.h>
7322 +#include <linux/miscdevice.h>
7323 +#include <linux/fs.h>
7324 +#include <linux/cdev.h>
7325 +#include <linux/mm.h>
7326 +#include <linux/mman.h>
7327 +
7328 +#define QBMAN_TEST_US_SWP 3 /* portal index for user space */
7329 +
7330 +#define QBMAN_TEST_MAGIC 'q'
7331 +struct qbman_test_swp_ioctl {
7332 + unsigned long portal1_cinh;
7333 + unsigned long portal1_cena;
7334 +};
7335 +struct qbman_test_dma_ioctl {
7336 + unsigned long ptr;
7337 + uint64_t phys_addr;
7338 +};
7339 +
7340 +struct qbman_test_priv {
7341 + int has_swp_map;
7342 + int has_dma_map;
7343 + unsigned long pgoff;
7344 +};
7345 +
7346 +#define QBMAN_TEST_SWP_MAP \
7347 + _IOR(QBMAN_TEST_MAGIC, 0x01, struct qbman_test_swp_ioctl)
7348 +#define QBMAN_TEST_SWP_UNMAP \
7349 + _IOR(QBMAN_TEST_MAGIC, 0x02, struct qbman_test_swp_ioctl)
7350 +#define QBMAN_TEST_DMA_MAP \
7351 + _IOR(QBMAN_TEST_MAGIC, 0x03, struct qbman_test_dma_ioctl)
7352 +#define QBMAN_TEST_DMA_UNMAP \
7353 + _IOR(QBMAN_TEST_MAGIC, 0x04, struct qbman_test_dma_ioctl)
7354 +
7355 +#define TEST_PORTAL1_CENA_PGOFF ((QBMAN_SWP_CENA_BASE + QBMAN_TEST_US_SWP * \
7356 + 0x10000) >> PAGE_SHIFT)
7357 +#define TEST_PORTAL1_CINH_PGOFF ((QBMAN_SWP_CINH_BASE + QBMAN_TEST_US_SWP * \
7358 + 0x10000) >> PAGE_SHIFT)
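+
+/* Illustrative sketch (editorial addition, not part of the original patch)
+ * of how a user-space test might drive this interface, assuming the misc
+ * device registered below appears as /dev/qbman-test:
+ *
+ * struct qbman_test_swp_ioctl params;
+ * int fd = open("/dev/qbman-test", O_RDWR);
+ *
+ * if (fd >= 0 && !ioctl(fd, QBMAN_TEST_SWP_MAP, &params)) {
+ * // params.portal1_cena / params.portal1_cinh now point at the
+ * // portal mappings created by the ioctl handler below
+ * ioctl(fd, QBMAN_TEST_SWP_UNMAP, &params);
+ * }
+ */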
7359 +
7360 +static int qbman_test_open(struct inode *inode, struct file *filp)
7361 +{
7362 + struct qbman_test_priv *priv;
7363 +
7364 + priv = kmalloc(sizeof(struct qbman_test_priv), GFP_KERNEL);
7365 + if (!priv)
7366 + return -ENOMEM;
7367 + filp->private_data = priv;
7368 + priv->has_swp_map = 0;
7369 + priv->has_dma_map = 0;
7370 + priv->pgoff = 0;
7371 + return 0;
7372 +}
7373 +
7374 +static int qbman_test_mmap(struct file *filp, struct vm_area_struct *vma)
7375 +{
7376 + int ret;
7377 + struct qbman_test_priv *priv = filp->private_data;
7378 +
7379 + BUG_ON(!priv);
7380 +
7381 + if (vma->vm_pgoff == TEST_PORTAL1_CINH_PGOFF)
7382 + vma->vm_page_prot = pgprot_noncached(vma->vm_page_prot);
7383 + else if (vma->vm_pgoff == TEST_PORTAL1_CENA_PGOFF)
7384 + vma->vm_page_prot = pgprot_cached_ns(vma->vm_page_prot);
7385 + else if (vma->vm_pgoff == priv->pgoff)
7386 + vma->vm_page_prot = pgprot_cached(vma->vm_page_prot);
7387 + else {
7388 + pr_err("Damn, unrecognised pg_off!!\n");
7389 + return -EINVAL;
7390 + }
7391 + ret = remap_pfn_range(vma, vma->vm_start, vma->vm_pgoff,
7392 + vma->vm_end - vma->vm_start,
7393 + vma->vm_page_prot);
7394 + return ret;
7395 +}
7396 +
7397 +static long qbman_test_ioctl(struct file *fp, unsigned int cmd,
7398 + unsigned long arg)
7399 +{
7400 + void __user *a = (void __user *)arg;
7401 + unsigned long longret, populate;
7402 + int ret = 0;
7403 + struct qbman_test_priv *priv = fp->private_data;
7404 +
7405 + BUG_ON(!priv);
7406 +
7407 + switch (cmd) {
7408 + case QBMAN_TEST_SWP_MAP:
7409 + {
7410 + struct qbman_test_swp_ioctl params;
7411 +
7412 + if (priv->has_swp_map)
7413 + return -EINVAL;
7414 + down_write(&current->mm->mmap_sem);
7415 + /* Map portal1 CINH */
7416 + longret = do_mmap_pgoff(fp, PAGE_SIZE, 0x10000,
7417 + PROT_READ | PROT_WRITE, MAP_SHARED,
7418 + TEST_PORTAL1_CINH_PGOFF, &populate);
7419 + if (longret & ~PAGE_MASK) {
7420 + ret = (int)longret;
7421 + goto out;
7422 + }
7423 + params.portal1_cinh = longret;
7424 + /* Map portal1 CENA */
7425 + longret = do_mmap_pgoff(fp, PAGE_SIZE, 0x10000,
7426 + PROT_READ | PROT_WRITE, MAP_SHARED,
7427 + TEST_PORTAL1_CENA_PGOFF, &populate);
7428 + if (longret & ~PAGE_MASK) {
7429 + ret = (int)longret;
7430 + goto out;
7431 + }
7432 + params.portal1_cena = longret;
7433 + priv->has_swp_map = 1;
7434 +out:
7435 + up_write(&current->mm->mmap_sem);
7436 + if (!ret && copy_to_user(a, &params, sizeof(params)))
7437 + return -EFAULT;
7438 + return ret;
7439 + }
7440 + case QBMAN_TEST_SWP_UNMAP:
7441 + {
7442 + struct qbman_test_swp_ioctl params;
7443 +
7444 + if (!priv->has_swp_map)
7445 + return -EINVAL;
7446 +
7447 + if (copy_from_user(&params, a, sizeof(params)))
7448 + return -EFAULT;
7449 + down_write(&current->mm->mmap_sem);
7450 + do_munmap(current->mm, params.portal1_cena, 0x10000);
7451 + do_munmap(current->mm, params.portal1_cinh, 0x10000);
7452 + up_write(&current->mm->mmap_sem);
7453 + priv->has_swp_map = 0;
7454 + return 0;
7455 + }
7456 + case QBMAN_TEST_DMA_MAP:
7457 + {
7458 + struct qbman_test_dma_ioctl params;
7459 + void *vaddr;
7460 +
7461 + if (priv->has_dma_map)
7462 + return -EINVAL;
7463 + vaddr = (void *)get_zeroed_page(GFP_KERNEL);
+ if (!vaddr)
+ return -ENOMEM;
7464 + params.phys_addr = virt_to_phys(vaddr);
7465 + priv->pgoff = (unsigned long)params.phys_addr >> PAGE_SHIFT;
7466 + down_write(&current->mm->mmap_sem);
7467 + longret = do_mmap_pgoff(fp, PAGE_SIZE, PAGE_SIZE,
7468 + PROT_READ | PROT_WRITE, MAP_SHARED,
7469 + priv->pgoff, &populate);
7470 + if (longret & ~PAGE_MASK) {
+ up_write(&current->mm->mmap_sem);
+ free_page((unsigned long)vaddr);
7471 + ret = (int)longret;
7472 + return ret;
7473 + }
7474 + params.ptr = longret;
7475 + priv->has_dma_map = 1;
7476 + up_write(&current->mm->mmap_sem);
7477 + if (copy_to_user(a, &params, sizeof(params)))
7478 + return -EFAULT;
7479 + return 0;
7480 + }
7481 + case QBMAN_TEST_DMA_UNMAP:
7482 + {
7483 + struct qbman_test_dma_ioctl params;
7484 +
7485 + if (!priv->has_dma_map)
7486 + return -EINVAL;
7487 + if (copy_from_user(&params, a, sizeof(params)))
7488 + return -EFAULT;
7489 + down_write(&current->mm->mmap_sem);
7490 + do_munmap(current->mm, params.ptr, PAGE_SIZE);
7491 + up_write(&current->mm->mmap_sem);
7492 + free_page((unsigned long)phys_to_virt(params.phys_addr));
7493 + priv->has_dma_map = 0;
7494 + return 0;
7495 + }
7496 + default:
+		pr_err("qbman-test: unsupported ioctl command\n");
7498 + }
7499 + return -EINVAL;
7500 +}
7501 +
7502 +static const struct file_operations qbman_fops = {
7503 + .open = qbman_test_open,
7504 + .mmap = qbman_test_mmap,
7505 + .unlocked_ioctl = qbman_test_ioctl
7506 +};
7507 +
7508 +static struct miscdevice qbman_miscdev = {
7509 + .name = "qbman-test",
7510 + .fops = &qbman_fops,
7511 + .minor = MISC_DYNAMIC_MINOR,
7512 +};
7513 +
7514 +static int qbman_miscdev_init;
7515 +
7516 +static int test_init(void)
7517 +{
7518 + int ret = qbman_test();
7519 +
7520 + if (!ret) {
7521 + /* MC image supports the test cases, so instantiate the
+		 * character device that the user-space test case will use to do
7523 + * its memory mappings. */
7524 + ret = misc_register(&qbman_miscdev);
7525 + if (ret) {
7526 + pr_err("qbman-test: failed to register misc device\n");
7527 + return ret;
7528 + }
7529 + pr_info("qbman-test: misc device registered!\n");
7530 + qbman_miscdev_init = 1;
7531 + }
7532 + return 0;
7533 +}
7534 +
7535 +static void test_exit(void)
7536 +{
7537 + if (qbman_miscdev_init) {
7538 + misc_deregister(&qbman_miscdev);
7539 + qbman_miscdev_init = 0;
7540 + }
7541 +}
7542 +
7543 +module_init(test_init);
7544 +module_exit(test_exit);
7545 --- /dev/null
7546 +++ b/drivers/staging/fsl-mc/include/fsl_dpaa2_fd.h
7547 @@ -0,0 +1,774 @@
7548 +/* Copyright 2014 Freescale Semiconductor Inc.
7549 + *
7550 + * Redistribution and use in source and binary forms, with or without
7551 + * modification, are permitted provided that the following conditions are met:
7552 + * * Redistributions of source code must retain the above copyright
7553 + * notice, this list of conditions and the following disclaimer.
7554 + * * Redistributions in binary form must reproduce the above copyright
7555 + * notice, this list of conditions and the following disclaimer in the
7556 + * documentation and/or other materials provided with the distribution.
7557 + * * Neither the name of Freescale Semiconductor nor the
7558 + * names of its contributors may be used to endorse or promote products
7559 + * derived from this software without specific prior written permission.
7560 + *
7561 + *
7562 + * ALTERNATIVELY, this software may be distributed under the terms of the
7563 + * GNU General Public License ("GPL") as published by the Free Software
7564 + * Foundation, either version 2 of that License or (at your option) any
7565 + * later version.
7566 + *
7567 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
7568 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
7569 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
7570 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
7571 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
7572 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
7573 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
7574 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
7575 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
7576 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
7577 + */
7578 +#ifndef __FSL_DPAA2_FD_H
7579 +#define __FSL_DPAA2_FD_H
7580 +
7581 +/**
7582 + * DOC: DPAA2 FD - Frame Descriptor APIs for DPAA2
7583 + *
7584 + * Frame Descriptors (FDs) are used to describe frame data in the DPAA2.
+ * Frames can be enqueued to and dequeued from Frame Queues, which are
+ * consumed by the various DPAA accelerators (WRIOP, SEC, PME, DCE).
7587 + *
7588 + * There are three types of frames: Single, Scatter Gather and Frame Lists.
7589 + *
7590 + * The set of APIs in this file must be used to create, manipulate and
+ * query Frame Descriptors.
7592 + *
7593 + */
7594 +
7595 +/**
7596 + * struct dpaa2_fd - Place-holder for FDs.
7597 + * @words: for easier/faster copying the whole FD structure.
7598 + * @addr_lo: the lower 32 bits of the address in FD.
7599 + * @addr_hi: the upper 32 bits of the address in FD.
7600 + * @len: the length field in FD.
+ * @bpid_offset: represents the bpid and offset fields in the FD.
7602 + * @frc: frame context
7603 + * @ctrl: the 32bit control bits including dd, sc,... va, err.
7604 + * @flc_lo: the lower 32bit of flow context.
7605 + * @flc_hi: the upper 32bits of flow context.
7606 + *
7607 + * This structure represents the basic Frame Descriptor used in the system.
7608 + * We represent it via the simplest form that we need for now. Different
7609 + * overlays may be needed to support different options, etc. (It is impractical
7610 + * to define One True Struct, because the resulting encoding routines (lots of
7611 + * read-modify-writes) would be worst-case performance whether or not
7612 + * circumstances required them.)
7613 + */
7614 +struct dpaa2_fd {
7615 + union {
7616 + u32 words[8];
7617 + struct dpaa2_fd_simple {
7618 + u32 addr_lo;
7619 + u32 addr_hi;
7620 + u32 len;
7621 + /* offset in the MS 16 bits, BPID in the LS 16 bits */
7622 + u32 bpid_offset;
7623 + u32 frc; /* frame context */
7624 + /* "err", "va", "cbmt", "asal", [...] */
7625 + u32 ctrl;
7626 + /* flow context */
7627 + u32 flc_lo;
7628 + u32 flc_hi;
7629 + } simple;
7630 + };
7631 +};
7632 +
7633 +enum dpaa2_fd_format {
7634 + dpaa2_fd_single = 0,
7635 + dpaa2_fd_list,
7636 + dpaa2_fd_sg
7637 +};
7638 +
7639 +/* Accessors for SG entry fields
7640 + *
+ * These setters and getters assume little endian format. To convert
+ * between LE and cpu endianness, the specific conversion functions must be
+ * called before the SGE contents are accessed by the core (on Rx), and
+ * before the SG table is sent to hardware (on Tx).
7645 + */
7646 +
7647 +/**
7648 + * dpaa2_fd_get_addr() - get the addr field of frame descriptor
7649 + * @fd: the given frame descriptor.
7650 + *
7651 + * Return the address in the frame descriptor.
7652 + */
7653 +static inline dma_addr_t dpaa2_fd_get_addr(const struct dpaa2_fd *fd)
7654 +{
7655 + return (dma_addr_t)((((uint64_t)fd->simple.addr_hi) << 32)
7656 + + fd->simple.addr_lo);
7657 +}
7658 +
7659 +/**
7660 + * dpaa2_fd_set_addr() - Set the addr field of frame descriptor
7661 + * @fd: the given frame descriptor.
7662 + * @addr: the address needs to be set in frame descriptor.
7663 + */
7664 +static inline void dpaa2_fd_set_addr(struct dpaa2_fd *fd, dma_addr_t addr)
7665 +{
7666 + fd->simple.addr_hi = upper_32_bits(addr);
7667 + fd->simple.addr_lo = lower_32_bits(addr);
7668 +}
7669 +
7670 +/**
7671 + * dpaa2_fd_get_frc() - Get the frame context in the frame descriptor
7672 + * @fd: the given frame descriptor.
7673 + *
7674 + * Return the frame context field in the frame descriptor.
7675 + */
7676 +static inline u32 dpaa2_fd_get_frc(const struct dpaa2_fd *fd)
7677 +{
7678 + return fd->simple.frc;
7679 +}
7680 +
7681 +/**
7682 + * dpaa2_fd_set_frc() - Set the frame context in the frame descriptor
7683 + * @fd: the given frame descriptor.
7684 + * @frc: the frame context needs to be set in frame descriptor.
7685 + */
7686 +static inline void dpaa2_fd_set_frc(struct dpaa2_fd *fd, u32 frc)
7687 +{
7688 + fd->simple.frc = frc;
7689 +}
7690 +
7691 +/**
7692 + * dpaa2_fd_get_flc() - Get the flow context in the frame descriptor
7693 + * @fd: the given frame descriptor.
7694 + *
7695 + * Return the flow context in the frame descriptor.
7696 + */
7697 +static inline dma_addr_t dpaa2_fd_get_flc(const struct dpaa2_fd *fd)
7698 +{
7699 + return (dma_addr_t)((((uint64_t)fd->simple.flc_hi) << 32) +
7700 + fd->simple.flc_lo);
7701 +}
7702 +
7703 +/**
7704 + * dpaa2_fd_set_flc() - Set the flow context field of frame descriptor
7705 + * @fd: the given frame descriptor.
7706 + * @flc_addr: the flow context needs to be set in frame descriptor.
7707 + */
7708 +static inline void dpaa2_fd_set_flc(struct dpaa2_fd *fd, dma_addr_t flc_addr)
7709 +{
7710 + fd->simple.flc_hi = upper_32_bits(flc_addr);
7711 + fd->simple.flc_lo = lower_32_bits(flc_addr);
7712 +}
7713 +
7714 +/**
7715 + * dpaa2_fd_get_len() - Get the length in the frame descriptor
7716 + * @fd: the given frame descriptor.
7717 + *
7718 + * Return the length field in the frame descriptor.
7719 + */
7720 +static inline u32 dpaa2_fd_get_len(const struct dpaa2_fd *fd)
7721 +{
7722 + return fd->simple.len;
7723 +}
7724 +
7725 +/**
7726 + * dpaa2_fd_set_len() - Set the length field of frame descriptor
7727 + * @fd: the given frame descriptor.
7728 + * @len: the length needs to be set in frame descriptor.
7729 + */
7730 +static inline void dpaa2_fd_set_len(struct dpaa2_fd *fd, u32 len)
7731 +{
7732 + fd->simple.len = len;
7733 +}
7734 +
7735 +/**
7736 + * dpaa2_fd_get_offset() - Get the offset field in the frame descriptor
7737 + * @fd: the given frame descriptor.
7738 + *
7739 + * Return the offset.
7740 + */
7741 +static inline uint16_t dpaa2_fd_get_offset(const struct dpaa2_fd *fd)
7742 +{
7743 + return (uint16_t)(fd->simple.bpid_offset >> 16) & 0x0FFF;
7744 +}
7745 +
7746 +/**
7747 + * dpaa2_fd_set_offset() - Set the offset field of frame descriptor
7748 + *
7749 + * @fd: the given frame descriptor.
7750 + * @offset: the offset needs to be set in frame descriptor.
7751 + */
7752 +static inline void dpaa2_fd_set_offset(struct dpaa2_fd *fd, uint16_t offset)
7753 +{
7754 + fd->simple.bpid_offset &= 0xF000FFFF;
+	fd->simple.bpid_offset |= (u32)(offset & 0x0FFF) << 16;
7756 +}
7757 +
7758 +/**
7759 + * dpaa2_fd_get_format() - Get the format field in the frame descriptor
7760 + * @fd: the given frame descriptor.
7761 + *
7762 + * Return the format.
7763 + */
7764 +static inline enum dpaa2_fd_format dpaa2_fd_get_format(
7765 + const struct dpaa2_fd *fd)
7766 +{
7767 + return (enum dpaa2_fd_format)((fd->simple.bpid_offset >> 28) & 0x3);
7768 +}
7769 +
7770 +/**
7771 + * dpaa2_fd_set_format() - Set the format field of frame descriptor
7772 + *
7773 + * @fd: the given frame descriptor.
7774 + * @format: the format needs to be set in frame descriptor.
7775 + */
7776 +static inline void dpaa2_fd_set_format(struct dpaa2_fd *fd,
7777 + enum dpaa2_fd_format format)
7778 +{
7779 + fd->simple.bpid_offset &= 0xCFFFFFFF;
7780 + fd->simple.bpid_offset |= (u32)format << 28;
7781 +}
7782 +
7783 +/**
7784 + * dpaa2_fd_get_bpid() - Get the bpid field in the frame descriptor
7785 + * @fd: the given frame descriptor.
7786 + *
7787 + * Return the bpid.
7788 + */
7789 +static inline uint16_t dpaa2_fd_get_bpid(const struct dpaa2_fd *fd)
7790 +{
7791 + return (uint16_t)(fd->simple.bpid_offset & 0xFFFF);
7792 +}
7793 +
7794 +/**
7795 + * dpaa2_fd_set_bpid() - Set the bpid field of frame descriptor
7796 + *
7797 + * @fd: the given frame descriptor.
7798 + * @bpid: the bpid needs to be set in frame descriptor.
7799 + */
7800 +static inline void dpaa2_fd_set_bpid(struct dpaa2_fd *fd, uint16_t bpid)
7801 +{
7802 + fd->simple.bpid_offset &= 0xFFFF0000;
7803 + fd->simple.bpid_offset |= (u32)bpid;
7804 +}
7805 +
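As a worked illustration of how the accessors above cooperate, the sketch below populates a minimal single-buffer FD and verifies a packed field. It is illustrative only: the helper name is hypothetical and the buffer is assumed to be DMA-mapped already.

/* Illustrative sketch only: build a single-buffer FD via the accessors,
 * without open-coding the bpid_offset packing. */
static inline void example_build_single_fd(struct dpaa2_fd *fd,
					   dma_addr_t buf, u32 len,
					   uint16_t bpid)
{
	memset(fd, 0, sizeof(*fd));
	dpaa2_fd_set_addr(fd, buf);
	dpaa2_fd_set_len(fd, len);
	dpaa2_fd_set_offset(fd, 0);
	dpaa2_fd_set_format(fd, dpaa2_fd_single);
	dpaa2_fd_set_bpid(fd, bpid);

	/* the getters recover what was packed into bpid_offset */
	BUG_ON(dpaa2_fd_get_format(fd) != dpaa2_fd_single);
}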
7806 +/**
7807 + * struct dpaa2_sg_entry - the scatter-gathering structure
7808 + * @addr_lo: the lower 32bit of address
7809 + * @addr_hi: the upper 32bit of address
7810 + * @len: the length in this sg entry.
7811 + * @bpid_offset: offset in the MS 16 bits, BPID in the LS 16 bits.
7812 + */
7813 +struct dpaa2_sg_entry {
7814 + u32 addr_lo;
7815 + u32 addr_hi;
7816 + u32 len;
7817 + u32 bpid_offset;
7818 +};
7819 +
7820 +enum dpaa2_sg_format {
7821 + dpaa2_sg_single = 0,
7822 + dpaa2_sg_frame_data,
7823 + dpaa2_sg_sgt_ext
7824 +};
7825 +
7826 +/**
7827 + * dpaa2_sg_get_addr() - Get the address from SG entry
7828 + * @sg: the given scatter-gathering object.
7829 + *
7830 + * Return the address.
7831 + */
7832 +static inline dma_addr_t dpaa2_sg_get_addr(const struct dpaa2_sg_entry *sg)
7833 +{
7834 + return (dma_addr_t)((((u64)sg->addr_hi) << 32) + sg->addr_lo);
7835 +}
7836 +
7837 +/**
7838 + * dpaa2_sg_set_addr() - Set the address in SG entry
7839 + * @sg: the given scatter-gathering object.
7840 + * @addr: the address to be set.
7841 + */
7842 +static inline void dpaa2_sg_set_addr(struct dpaa2_sg_entry *sg, dma_addr_t addr)
7843 +{
7844 + sg->addr_hi = upper_32_bits(addr);
7845 + sg->addr_lo = lower_32_bits(addr);
7846 +}
7847 +
7848 +
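+/**
+ * dpaa2_sg_short_len() - Check whether the SG entry uses the short-length
+ * format, in which only the lower 17 bits of the length field are valid.
+ * @sg: the given scatter-gathering object.
+ *
+ * Return bool.
+ */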
7849 +static inline bool dpaa2_sg_short_len(const struct dpaa2_sg_entry *sg)
7850 +{
7851 + return (sg->bpid_offset >> 30) & 0x1;
7852 +}
7853 +
7854 +/**
7855 + * dpaa2_sg_get_len() - Get the length in SG entry
7856 + * @sg: the given scatter-gathering object.
7857 + *
7858 + * Return the length.
7859 + */
7860 +static inline u32 dpaa2_sg_get_len(const struct dpaa2_sg_entry *sg)
7861 +{
7862 + if (dpaa2_sg_short_len(sg))
7863 + return sg->len & 0x1FFFF;
7864 + return sg->len;
7865 +}
7866 +
7867 +/**
7868 + * dpaa2_sg_set_len() - Set the length in SG entry
7869 + * @sg: the given scatter-gathering object.
7870 + * @len: the length to be set.
7871 + */
7872 +static inline void dpaa2_sg_set_len(struct dpaa2_sg_entry *sg, u32 len)
7873 +{
7874 + sg->len = len;
7875 +}
7876 +
7877 +/**
7878 + * dpaa2_sg_get_offset() - Get the offset in SG entry
7879 + * @sg: the given scatter-gathering object.
7880 + *
7881 + * Return the offset.
7882 + */
7883 +static inline u16 dpaa2_sg_get_offset(const struct dpaa2_sg_entry *sg)
7884 +{
7885 + return (u16)(sg->bpid_offset >> 16) & 0x0FFF;
7886 +}
7887 +
7888 +/**
7889 + * dpaa2_sg_set_offset() - Set the offset in SG entry
7890 + * @sg: the given scatter-gathering object.
7891 + * @offset: the offset to be set.
7892 + */
7893 +static inline void dpaa2_sg_set_offset(struct dpaa2_sg_entry *sg,
7894 + u16 offset)
7895 +{
7896 + sg->bpid_offset &= 0xF000FFFF;
+	sg->bpid_offset |= (u32)(offset & 0x0FFF) << 16;
7898 +}
7899 +
7900 +/**
7901 + * dpaa2_sg_get_format() - Get the SG format in SG entry
7902 + * @sg: the given scatter-gathering object.
7903 + *
7904 + * Return the format.
7905 + */
7906 +static inline enum dpaa2_sg_format
7907 + dpaa2_sg_get_format(const struct dpaa2_sg_entry *sg)
7908 +{
7909 + return (enum dpaa2_sg_format)((sg->bpid_offset >> 28) & 0x3);
7910 +}
7911 +
7912 +/**
7913 + * dpaa2_sg_set_format() - Set the SG format in SG entry
7914 + * @sg: the given scatter-gathering object.
7915 + * @format: the format to be set.
7916 + */
7917 +static inline void dpaa2_sg_set_format(struct dpaa2_sg_entry *sg,
7918 + enum dpaa2_sg_format format)
7919 +{
7920 + sg->bpid_offset &= 0xCFFFFFFF;
7921 + sg->bpid_offset |= (u32)format << 28;
7922 +}
7923 +
7924 +/**
7925 + * dpaa2_sg_get_bpid() - Get the buffer pool id in SG entry
7926 + * @sg: the given scatter-gathering object.
7927 + *
7928 + * Return the bpid.
7929 + */
7930 +static inline u16 dpaa2_sg_get_bpid(const struct dpaa2_sg_entry *sg)
7931 +{
7932 + return (u16)(sg->bpid_offset & 0x3FFF);
7933 +}
7934 +
7935 +/**
7936 + * dpaa2_sg_set_bpid() - Set the buffer pool id in SG entry
7937 + * @sg: the given scatter-gathering object.
7938 + * @bpid: the bpid to be set.
7939 + */
7940 +static inline void dpaa2_sg_set_bpid(struct dpaa2_sg_entry *sg, u16 bpid)
7941 +{
7942 + sg->bpid_offset &= 0xFFFFC000;
7943 + sg->bpid_offset |= (u32)bpid;
7944 +}
7945 +
7946 +/**
7947 + * dpaa2_sg_is_final() - Check final bit in SG entry
7948 + * @sg: the given scatter-gathering object.
7949 + *
7950 + * Return bool.
7951 + */
7952 +static inline bool dpaa2_sg_is_final(const struct dpaa2_sg_entry *sg)
7953 +{
7954 + return !!(sg->bpid_offset >> 31);
7955 +}
7956 +
7957 +/**
7958 + * dpaa2_sg_set_final() - Set the final bit in SG entry
7959 + * @sg: the given scatter-gathering object.
7960 + * @final: the final boolean to be set.
7961 + */
7962 +static inline void dpaa2_sg_set_final(struct dpaa2_sg_entry *sg, bool final)
7963 +{
7964 + sg->bpid_offset &= 0x7FFFFFFF;
7965 + sg->bpid_offset |= (u32)final << 31;
7966 +}
7967 +
7968 +/* Endianness conversion helper functions
7969 + * The accelerator drivers which construct / read scatter gather entries
7970 + * need to call these in order to account for endianness mismatches between
7971 + * hardware and cpu
7972 + */
7973 +#ifdef __BIG_ENDIAN
7974 +/**
7975 + * dpaa2_sg_cpu_to_le() - convert scatter gather entry from native cpu
+ * format to little endian format.
7977 + * @sg: the given scatter gather entry.
7978 + */
7979 +static inline void dpaa2_sg_cpu_to_le(struct dpaa2_sg_entry *sg)
7980 +{
7981 + uint32_t *p = (uint32_t *)sg;
7982 + int i;
7983 +
7984 + for (i = 0; i < sizeof(*sg) / sizeof(u32); i++)
7985 + cpu_to_le32s(p++);
7986 +}
7987 +
7988 +/**
7989 + * dpaa2_sg_le_to_cpu() - convert scatter gather entry from little endian
7990 + * format to native cpu format.
7991 + * @sg: the given scatter gather entry.
7992 + */
7993 +static inline void dpaa2_sg_le_to_cpu(struct dpaa2_sg_entry *sg)
7994 +{
7995 + uint32_t *p = (uint32_t *)sg;
7996 + int i;
7997 +
7998 + for (i = 0; i < sizeof(*sg) / sizeof(u32); i++)
7999 + le32_to_cpus(p++);
8000 +}
8001 +#else
8002 +#define dpaa2_sg_cpu_to_le(sg)
8003 +#define dpaa2_sg_le_to_cpu(sg)
8004 +#endif /* __BIG_ENDIAN */
8005 +
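As a concrete illustration of the Tx-side convention described above, a driver might fill a small SG table in cpu format and convert each entry just before handing the table to hardware. A sketch, assuming both buffers are already DMA-mapped (the helper name, addresses, lengths and bpid are placeholders):

/* Illustrative sketch only: build a two-entry SG table and convert it
 * to the little-endian layout the hardware expects (a no-op on LE cpus). */
static inline void example_fill_sgt(struct dpaa2_sg_entry sgt[2],
				    dma_addr_t buf0, u32 len0,
				    dma_addr_t buf1, u32 len1, u16 bpid)
{
	int i;

	memset(sgt, 0, 2 * sizeof(*sgt));

	dpaa2_sg_set_addr(&sgt[0], buf0);
	dpaa2_sg_set_len(&sgt[0], len0);
	dpaa2_sg_set_bpid(&sgt[0], bpid);

	dpaa2_sg_set_addr(&sgt[1], buf1);
	dpaa2_sg_set_len(&sgt[1], len1);
	dpaa2_sg_set_bpid(&sgt[1], bpid);
	dpaa2_sg_set_final(&sgt[1], true);	/* last entry in the table */

	for (i = 0; i < 2; i++)
		dpaa2_sg_cpu_to_le(&sgt[i]);	/* Tx: cpu -> LE */
}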
8006 +
8007 +/**
8008 + * struct dpaa2_fl_entry - structure for frame list entry.
8009 + * @addr_lo: the lower 32bit of address
8010 + * @addr_hi: the upper 32bit of address
8011 + * @len: the length in this sg entry.
8012 + * @bpid_offset: offset in the MS 16 bits, BPID in the LS 16 bits.
8013 + * @frc: frame context
8014 + * @ctrl: the 32bit control bits including dd, sc,... va, err.
8015 + * @flc_lo: the lower 32bit of flow context.
8016 + * @flc_hi: the upper 32bits of flow context.
8017 + *
8018 + * Frame List Entry (FLE)
8019 + * Identical to dpaa2_fd.simple layout, but some bits are different
8020 + */
8021 +struct dpaa2_fl_entry {
8022 + u32 addr_lo;
8023 + u32 addr_hi;
8024 + u32 len;
8025 + u32 bpid_offset;
8026 + u32 frc;
8027 + u32 ctrl;
8028 + u32 flc_lo;
8029 + u32 flc_hi;
8030 +};
8031 +
8032 +enum dpaa2_fl_format {
8033 + dpaa2_fl_single = 0,
8034 + dpaa2_fl_res,
8035 + dpaa2_fl_sg
8036 +};
8037 +
8038 +/**
8039 + * dpaa2_fl_get_addr() - Get address in the frame list entry
8040 + * @fle: the given frame list entry.
8041 + *
8042 + * Return address for the get function.
8043 + */
8044 +static inline dma_addr_t dpaa2_fl_get_addr(const struct dpaa2_fl_entry *fle)
8045 +{
8046 + return (dma_addr_t)((((uint64_t)fle->addr_hi) << 32) + fle->addr_lo);
8047 +}
8048 +
8049 +/**
8050 + * dpaa2_fl_set_addr() - Set the address in the frame list entry
8051 + * @fle: the given frame list entry.
8052 + * @addr: the address needs to be set.
8053 + *
8054 + */
8055 +static inline void dpaa2_fl_set_addr(struct dpaa2_fl_entry *fle,
8056 + dma_addr_t addr)
8057 +{
8058 + fle->addr_hi = upper_32_bits(addr);
8059 + fle->addr_lo = lower_32_bits(addr);
8060 +}
8061 +
8062 +/**
8063 + * dpaa2_fl_get_flc() - Get the flow context in the frame list entry
8064 + * @fle: the given frame list entry.
8065 + *
8066 + * Return flow context for the get function.
8067 + */
8068 +static inline dma_addr_t dpaa2_fl_get_flc(const struct dpaa2_fl_entry *fle)
8069 +{
8070 + return (dma_addr_t)((((uint64_t)fle->flc_hi) << 32) + fle->flc_lo);
8071 +}
8072 +
8073 +/**
8074 + * dpaa2_fl_set_flc() - Set the flow context in the frame list entry
8075 + * @fle: the given frame list entry.
8076 + * @flc_addr: the flow context address needs to be set.
8077 + *
8078 + */
8079 +static inline void dpaa2_fl_set_flc(struct dpaa2_fl_entry *fle,
8080 + dma_addr_t flc_addr)
8081 +{
8082 + fle->flc_hi = upper_32_bits(flc_addr);
8083 + fle->flc_lo = lower_32_bits(flc_addr);
8084 +}
8085 +
8086 +/**
8087 + * dpaa2_fl_get_len() - Get the length in the frame list entry
8088 + * @fle: the given frame list entry.
8089 + *
8090 + * Return length for the get function.
8091 + */
8092 +static inline u32 dpaa2_fl_get_len(const struct dpaa2_fl_entry *fle)
8093 +{
8094 + return fle->len;
8095 +}
8096 +
8097 +/**
8098 + * dpaa2_fl_set_len() - Set the length in the frame list entry
8099 + * @fle: the given frame list entry.
8100 + * @len: the length needs to be set.
8101 + *
8102 + */
8103 +static inline void dpaa2_fl_set_len(struct dpaa2_fl_entry *fle, u32 len)
8104 +{
8105 + fle->len = len;
8106 +}
8107 +
8108 +/**
+ * dpaa2_fl_get_offset() - Get the offset in the frame list entry
8110 + * @fle: the given frame list entry.
8111 + *
8112 + * Return offset for the get function.
8113 + */
8114 +static inline uint16_t dpaa2_fl_get_offset(const struct dpaa2_fl_entry *fle)
8115 +{
8116 + return (uint16_t)(fle->bpid_offset >> 16) & 0x0FFF;
8117 +}
8118 +
8119 +/**
8120 + * dpaa2_fl_set_offset() - Set the offset in the frame list entry
8121 + * @fle: the given frame list entry.
8122 + * @offset: the offset needs to be set.
8123 + *
8124 + */
8125 +static inline void dpaa2_fl_set_offset(struct dpaa2_fl_entry *fle,
8126 + uint16_t offset)
8127 +{
8128 + fle->bpid_offset &= 0xF000FFFF;
8129 + fle->bpid_offset |= (u32)(offset & 0x0FFF) << 16;
8130 +}
8131 +
8132 +/**
8133 + * dpaa2_fl_get_format() - Get the format in the frame list entry
8134 + * @fle: the given frame list entry.
8135 + *
8136 + * Return frame list format for the get function.
8137 + */
8138 +static inline enum dpaa2_fl_format dpaa2_fl_get_format(
8139 + const struct dpaa2_fl_entry *fle)
8140 +{
8141 + return (enum dpaa2_fl_format)((fle->bpid_offset >> 28) & 0x3);
8142 +}
8143 +
8144 +/**
8145 + * dpaa2_fl_set_format() - Set the format in the frame list entry
8146 + * @fle: the given frame list entry.
8147 + * @format: the frame list format needs to be set.
8148 + *
8149 + */
8150 +static inline void dpaa2_fl_set_format(struct dpaa2_fl_entry *fle,
8151 + enum dpaa2_fl_format format)
8152 +{
8153 + fle->bpid_offset &= 0xCFFFFFFF;
8154 + fle->bpid_offset |= (u32)(format & 0x3) << 28;
8155 +}
8156 +
8157 +/**
8158 + * dpaa2_fl_get_bpid() - Get the buffer pool id in the frame list entry
8159 + * @fle: the given frame list entry.
8160 + *
8161 + * Return bpid for the get function.
8162 + */
8163 +static inline uint16_t dpaa2_fl_get_bpid(const struct dpaa2_fl_entry *fle)
8164 +{
8165 + return (uint16_t)(fle->bpid_offset & 0x3FFF);
8166 +}
8167 +
8168 +/**
8169 + * dpaa2_fl_set_bpid() - Set the buffer pool id in the frame list entry
8170 + * @fle: the given frame list entry.
8171 + * @bpid: the buffer pool id needs to be set.
8172 + *
8173 + */
8174 +static inline void dpaa2_fl_set_bpid(struct dpaa2_fl_entry *fle, uint16_t bpid)
8175 +{
8176 + fle->bpid_offset &= 0xFFFFC000;
8177 + fle->bpid_offset |= (u32)bpid;
8178 +}
8179 +
+/**
+ * dpaa2_fl_is_final() - Check whether the final bit is set in the frame
+ * list entry.
+ * @fle: the given frame list entry.
+ *
+ * Return the final bit setting.
8184 + */
8185 +static inline bool dpaa2_fl_is_final(const struct dpaa2_fl_entry *fle)
8186 +{
8187 + return !!(fle->bpid_offset >> 31);
8188 +}
8189 +
8190 +/**
8191 + * dpaa2_fl_set_final() - Set the final bit in the frame list entry
8192 + * @fle: the given frame list entry.
8193 + * @final: the final bit needs to be set.
8194 + *
8195 + */
8196 +static inline void dpaa2_fl_set_final(struct dpaa2_fl_entry *fle, bool final)
8197 +{
8198 + fle->bpid_offset &= 0x7FFFFFFF;
8199 + fle->bpid_offset |= (u32)final << 31;
8200 +}
8201 +
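The FLE accessors mirror the FD ones, so a frame list is built the same way, with the final bit closing the list. A sketch of a two-entry list (the helper name and the input/output pairing are hypothetical; addresses and lengths are placeholders):

/* Illustrative sketch only: populate a pair of frame list entries and
 * terminate the list with the final bit. */
static inline void example_fill_fl(struct dpaa2_fl_entry fle[2],
				   dma_addr_t in, u32 in_len,
				   dma_addr_t out, u32 out_len)
{
	memset(fle, 0, 2 * sizeof(*fle));

	dpaa2_fl_set_addr(&fle[0], in);
	dpaa2_fl_set_len(&fle[0], in_len);
	dpaa2_fl_set_format(&fle[0], dpaa2_fl_single);

	dpaa2_fl_set_addr(&fle[1], out);
	dpaa2_fl_set_len(&fle[1], out_len);
	dpaa2_fl_set_format(&fle[1], dpaa2_fl_single);
	dpaa2_fl_set_final(&fle[1], true);	/* close the list */
}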
8202 +/**
8203 + * struct dpaa2_dq - the qman result structure
+ * @dont_manipulate_directly: the 16 32-bit words that hold the whole
+ * possible qman dequeue result.
8206 + *
8207 + * When frames are dequeued, the FDs show up inside "dequeue" result structures
8208 + * (if at all, not all dequeue results contain valid FDs). This structure type
8209 + * is intentionally defined without internal detail, and the only reason it
8210 + * isn't declared opaquely (without size) is to allow the user to provide
8211 + * suitably-sized (and aligned) memory for these entries.
8212 + */
8213 +struct dpaa2_dq {
8214 + uint32_t dont_manipulate_directly[16];
8215 +};
8216 +
8217 +/* Parsing frame dequeue results */
8218 +/* FQ empty */
8219 +#define DPAA2_DQ_STAT_FQEMPTY 0x80
8220 +/* FQ held active */
8221 +#define DPAA2_DQ_STAT_HELDACTIVE 0x40
8222 +/* FQ force eligible */
8223 +#define DPAA2_DQ_STAT_FORCEELIGIBLE 0x20
8224 +/* Valid frame */
8225 +#define DPAA2_DQ_STAT_VALIDFRAME 0x10
8226 +/* FQ ODP enable */
8227 +#define DPAA2_DQ_STAT_ODPVALID 0x04
8228 +/* Volatile dequeue */
8229 +#define DPAA2_DQ_STAT_VOLATILE 0x02
8230 +/* volatile dequeue command is expired */
8231 +#define DPAA2_DQ_STAT_EXPIRED 0x01
8232 +
8233 +/**
8234 + * dpaa2_dq_flags() - Get the stat field of dequeue response
8235 + * @dq: the dequeue result.
8236 + */
8237 +uint32_t dpaa2_dq_flags(const struct dpaa2_dq *dq);
8238 +
8239 +/**
8240 + * dpaa2_dq_is_pull() - Check whether the dq response is from a pull
8241 + * command.
8242 + * @dq: the dequeue result.
8243 + *
+ * Return non-zero for volatile (pull) dequeue, 0 for static dequeue.
8245 + */
8246 +static inline int dpaa2_dq_is_pull(const struct dpaa2_dq *dq)
8247 +{
8248 + return (int)(dpaa2_dq_flags(dq) & DPAA2_DQ_STAT_VOLATILE);
8249 +}
8250 +
8251 +/**
8252 + * dpaa2_dq_is_pull_complete() - Check whether the pull command is completed.
8253 + * @dq: the dequeue result.
8254 + *
8255 + * Return boolean.
8256 + */
8257 +static inline int dpaa2_dq_is_pull_complete(
8258 + const struct dpaa2_dq *dq)
8259 +{
8260 + return (int)(dpaa2_dq_flags(dq) & DPAA2_DQ_STAT_EXPIRED);
8261 +}
8262 +
8263 +/**
8264 + * dpaa2_dq_seqnum() - Get the seqnum field in dequeue response
8265 + * seqnum is valid only if VALIDFRAME flag is TRUE
8266 + * @dq: the dequeue result.
8267 + *
8268 + * Return seqnum.
8269 + */
8270 +uint16_t dpaa2_dq_seqnum(const struct dpaa2_dq *dq);
8271 +
8272 +/**
+ * dpaa2_dq_odpid() - Get the odpid field in dequeue response
+ * odpid is valid only if ODPVALID flag is TRUE.
8275 + * @dq: the dequeue result.
8276 + *
8277 + * Return odpid.
8278 + */
8279 +uint16_t dpaa2_dq_odpid(const struct dpaa2_dq *dq);
8280 +
8281 +/**
8282 + * dpaa2_dq_fqid() - Get the fqid in dequeue response
8283 + * @dq: the dequeue result.
8284 + *
8285 + * Return fqid.
8286 + */
8287 +uint32_t dpaa2_dq_fqid(const struct dpaa2_dq *dq);
8288 +
8289 +/**
8290 + * dpaa2_dq_byte_count() - Get the byte count in dequeue response
8291 + * @dq: the dequeue result.
8292 + *
8293 + * Return the byte count remaining in the FQ.
8294 + */
8295 +uint32_t dpaa2_dq_byte_count(const struct dpaa2_dq *dq);
8296 +
8297 +/**
8298 + * dpaa2_dq_frame_count() - Get the frame count in dequeue response
8299 + * @dq: the dequeue result.
8300 + *
8301 + * Return the frame count remaining in the FQ.
8302 + */
8303 +uint32_t dpaa2_dq_frame_count(const struct dpaa2_dq *dq);
8304 +
8305 +/**
+ * dpaa2_dq_fqd_ctx() - Get the frame queue context in dequeue response
8307 + * @dq: the dequeue result.
8308 + *
8309 + * Return the frame queue context.
8310 + */
8311 +uint64_t dpaa2_dq_fqd_ctx(const struct dpaa2_dq *dq);
8312 +
8313 +/**
8314 + * dpaa2_dq_fd() - Get the frame descriptor in dequeue response
8315 + * @dq: the dequeue result.
8316 + *
8317 + * Return the frame descriptor.
8318 + */
8319 +const struct dpaa2_fd *dpaa2_dq_fd(const struct dpaa2_dq *dq);
8320 +
8321 +#endif /* __FSL_DPAA2_FD_H */
8322 --- /dev/null
8323 +++ b/drivers/staging/fsl-mc/include/fsl_dpaa2_io.h
8324 @@ -0,0 +1,619 @@
8325 +/* Copyright 2014 Freescale Semiconductor Inc.
8326 + *
8327 + * Redistribution and use in source and binary forms, with or without
8328 + * modification, are permitted provided that the following conditions are met:
8329 + * * Redistributions of source code must retain the above copyright
8330 + * notice, this list of conditions and the following disclaimer.
8331 + * * Redistributions in binary form must reproduce the above copyright
8332 + * notice, this list of conditions and the following disclaimer in the
8333 + * documentation and/or other materials provided with the distribution.
8334 + * * Neither the name of Freescale Semiconductor nor the
8335 + * names of its contributors may be used to endorse or promote products
8336 + * derived from this software without specific prior written permission.
8337 + *
8338 + *
8339 + * ALTERNATIVELY, this software may be distributed under the terms of the
8340 + * GNU General Public License ("GPL") as published by the Free Software
8341 + * Foundation, either version 2 of that License or (at your option) any
8342 + * later version.
8343 + *
8344 + * THIS SOFTWARE IS PROVIDED BY Freescale Semiconductor ``AS IS'' AND ANY
8345 + * EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED
8346 + * WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE
8347 + * DISCLAIMED. IN NO EVENT SHALL Freescale Semiconductor BE LIABLE FOR ANY
8348 + * DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES
8349 + * (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES;
8350 + * LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND
8351 + * ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
8352 + * (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS
8353 + * SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
8354 + */
8355 +#ifndef __FSL_DPAA2_IO_H
8356 +#define __FSL_DPAA2_IO_H
8357 +
8358 +#include "fsl_dpaa2_fd.h"
8359 +
8360 +struct dpaa2_io;
8361 +struct dpaa2_io_store;
8362 +
8363 +/**
8364 + * DOC: DPIO Service Management
8365 + *
8366 + * The DPIO service provides APIs for users to interact with the datapath
+ * by enqueueing and dequeuing frame descriptors.
8368 + *
8369 + * The following set of APIs can be used to enqueue and dequeue frames
8370 + * as well as producing notification callbacks when data is available
8371 + * for dequeue.
8372 + */
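For instance, a consumer that only needs a portal for occasional operations can borrow the driver's global service and drop the reference when done. A minimal sketch using dpaa2_io_default_service() and dpaa2_io_down(), both declared later in this header (the function name is hypothetical):

/* Illustrative sketch only: take a reference on the default DPIO
 * service, use it, then release it. */
static inline int example_use_default_service(void)
{
	struct dpaa2_io *io = dpaa2_io_default_service();

	if (!io)
		return -ENODEV;

	/* ... enqueue/dequeue via the service APIs below ... */

	dpaa2_io_down(io);	/* balances the reference taken above */
	return 0;
}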
8373 +
8374 +/**
8375 + * struct dpaa2_io_desc - The DPIO descriptor.
+ * @receives_notifications: Use notification mode.
+ * @has_irq: use irq-based processing.
8378 + * @will_poll: use poll processing.
8379 + * @has_8prio: set for channel with 8 priority WQs.
8380 + * @cpu: the cpu index that at least interrupt handlers will execute on.
+ * @stash_affinity: non-zero if stash transactions for this portal favour 'cpu'.
8382 + * @regs_cena: the cache enabled regs.
8383 + * @regs_cinh: the cache inhibited regs.
8384 + * @dpio_id: The dpio index.
8385 + * @qman_version: the qman version
8386 + *
8387 + * Describe the attributes and features of the DPIO object.
8388 + */
8389 +struct dpaa2_io_desc {
8390 + /* non-zero iff the DPIO has a channel */
8391 + int receives_notifications;
8392 + /* non-zero if the DPIO portal interrupt is handled. If so, the
8393 + * caller/OS handles the interrupt and calls dpaa2_io_service_irq(). */
8394 + int has_irq;
8395 + /* non-zero if the caller/OS is prepared to called the
8396 + * dpaa2_io_service_poll() routine as part of its run-to-completion (or
8397 + * scheduling) loop. If so, the DPIO service may dynamically switch some
+	 * of its processing between polling-based and irq-based. It is an
+	 * illegal combination to have (!has_irq && !will_poll). */
8400 + int will_poll;
8401 + /* ignored unless 'receives_notifications'. Non-zero iff the channel has
8402 + * 8 priority WQs, otherwise the channel has 2. */
8403 + int has_8prio;
8404 + /* the cpu index that at least interrupt handlers will execute on. And
8405 + * if 'stash_affinity' is non-zero, the cache targeted by stash
8406 + * transactions is affine to this cpu. */
8407 + int cpu;
8408 + /* non-zero if stash transactions for this portal favour 'cpu' over
8409 + * other CPUs. (Eg. zero if there's no stashing, or stashing is to
8410 + * shared cache.) */
8411 + int stash_affinity;
8412 + /* Caller-provided flags, determined by bus-scanning and/or creation of
8413 + * DPIO objects via MC commands. */
8414 + void *regs_cena;
8415 + void *regs_cinh;
8416 + int dpio_id;
8417 + uint32_t qman_version;
8418 +};
8419 +
8420 +/**
8421 + * dpaa2_io_create() - create a dpaa2_io object.
8422 + * @desc: the dpaa2_io descriptor
8423 + *
8424 + * Activates a "struct dpaa2_io" corresponding to the given config of an actual
+ * DPIO object. This handle can be used on its own (like a one-portal "DPIO
8426 + * service") or later be added to a service-type "struct dpaa2_io" object. Note,
8427 + * the information required on 'cfg' is copied so the caller is free to do as
8428 + * they wish with the input parameter upon return.
8429 + *
8430 + * Return a valid dpaa2_io object for success, or NULL for failure.
8431 + */
8432 +struct dpaa2_io *dpaa2_io_create(const struct dpaa2_io_desc *desc);
8433 +
8434 +/**
8435 + * dpaa2_io_create_service() - Create an (initially empty) DPIO service.
8436 + *
8437 + * Return a valid dpaa2_io object for success, or NULL for failure.
8438 + */
8439 +struct dpaa2_io *dpaa2_io_create_service(void);
8440 +
8441 +/**
8442 + * dpaa2_io_default_service() - Use the driver's own global (and initially
8443 + * empty) DPIO service.
8444 + *
8445 + * This increments the reference count, so don't forget to use dpaa2_io_down()
8446 + * for each time this function is called.
8447 + *
8448 + * Return a valid dpaa2_io object for success, or NULL for failure.
8449 + */
8450 +struct dpaa2_io *dpaa2_io_default_service(void);
8451 +
8452 +/**
8453 + * dpaa2_io_down() - release the dpaa2_io object.
8454 + * @d: the dpaa2_io object to be released.
8455 + *
8456 + * The "struct dpaa2_io" type can represent an individual DPIO object (as
8457 + * described by "struct dpaa2_io_desc") or an instance of a "DPIO service",
8458 + * which can be used to group/encapsulate multiple DPIO objects. In all cases,
8459 + * each handle obtained should be released using this function.
8460 + */
8461 +void dpaa2_io_down(struct dpaa2_io *d);
8462 +
8463 +/**
8464 + * dpaa2_io_service_add() - Add the given DPIO object to the given DPIO service.
8465 + * @service: the given DPIO service.
8466 + * @obj: the given DPIO object.
8467 + *
8468 + * 'service' must have been created by dpaa2_io_create_service() and 'obj'
8469 + * must have been created by dpaa2_io_create(). This increments the reference
8470 + * count on the object that 'obj' refers to, so the user could call
8471 + * dpaa2_io_down(obj) after this and the object will persist within the service
8472 + * (and will be destroyed when the service is destroyed).
8473 + *
8474 + * Return 0 for success, or -EINVAL for failure.
8475 + */
8476 +int dpaa2_io_service_add(struct dpaa2_io *service, struct dpaa2_io *obj);
8477 +
8478 +/**
8479 + * dpaa2_io_get_descriptor() - Get the DPIO descriptor of the given DPIO object.
8480 + * @obj: the given DPIO object.
8481 + * @desc: the returned DPIO descriptor.
8482 + *
8483 + * This function will return failure if the given dpaa2_io struct represents a
8484 + * service rather than an individual DPIO object, otherwise it returns zero and
8485 + * the given 'cfg' structure is filled in.
8486 + *
8487 + * Return 0 for success, or -EINVAL for failure.
8488 + */
8489 +int dpaa2_io_get_descriptor(struct dpaa2_io *obj, struct dpaa2_io_desc *desc);
8490 +
8491 +/**
8492 + * dpaa2_io_poll() - Process any notifications and h/w-initiated events that
8493 + * are polling-driven.
8494 + * @obj: the given DPIO object.
8495 + *
8496 + * Obligatory for DPIO objects that have dpaa2_io_desc::will_poll non-zero.
8497 + *
8498 + * Return 0 for success, or -EINVAL for failure.
8499 + */
8500 +int dpaa2_io_poll(struct dpaa2_io *obj);
8501 +
8502 +/**
8503 + * dpaa2_io_irq() - Process any notifications and h/w-initiated events that are
8504 + * irq-driven.
8505 + * @obj: the given DPIO object.
8506 + *
8507 + * Obligatory for DPIO objects that have dpaa2_io_desc::has_irq non-zero.
8508 + *
8509 + * Return IRQ_HANDLED for success, or -EINVAL for failure.
8510 + */
8511 +int dpaa2_io_irq(struct dpaa2_io *obj);
8512 +
8513 +/**
8514 + * dpaa2_io_pause_poll() - Used to stop polling.
8515 + * @obj: the given DPIO object.
8516 + *
8517 + * If a polling application is going to stop polling for a period of time and
8518 + * supports interrupt processing, it can call this function to convert all
8519 + * processing to IRQ. (Eg. when sleeping.)
8520 + *
8521 + * Return -EINVAL.
8522 + */
8523 +int dpaa2_io_pause_poll(struct dpaa2_io *obj);
8524 +
8525 +/**
8526 + * dpaa2_io_resume_poll() - Resume polling
8527 + * @obj: the given DPIO object.
8528 + *
8529 + * Return -EINVAL.
8530 + */
8531 +int dpaa2_io_resume_poll(struct dpaa2_io *obj);
8532 +
8533 +/**
8534 + * dpaa2_io_service_notifications() - Get a mask of cpus that the DPIO service
8535 + * can receive notifications on.
8536 + * @s: the given DPIO object.
8537 + * @mask: the mask of cpus.
8538 + *
8539 + * Note that this is a run-time snapshot. If things like cpu-hotplug are
8540 + * supported in the target system, then an attempt to register notifications
8541 + * for a cpu that appears present in the given mask might fail if that cpu has
8542 + * gone offline in the mean time.
8543 + */
8544 +void dpaa2_io_service_notifications(struct dpaa2_io *s, cpumask_t *mask);
8545 +
8546 +/**
+ * dpaa2_io_service_stashing() - Get a mask of cpus that the DPIO service has
+ * stash affinity to.
8549 + * @s: the given DPIO object.
8550 + * @mask: the mask of cpus.
8551 + */
8552 +void dpaa2_io_service_stashing(struct dpaa2_io *s, cpumask_t *mask);
8553 +
8554 +/**
+ * dpaa2_io_service_has_nonaffine() - Check the DPIO service's cpu affinity
8556 + * for stashing.
8557 + * @s: the given DPIO object.
8558 + *
8559 + * Return a boolean, whether or not the DPIO service has resources that have no
8560 + * particular cpu affinity for stashing. (Useful to know if you wish to operate
8561 + * on CPUs that the service has no affinity to, you would choose to use
8562 + * resources that are neutral, rather than affine to a different CPU.) Unlike
+ * other service-specific APIs, this one doesn't return an error if it is
+ * passed a non-service object, so do not pass it one.
8565 + */
8566 +int dpaa2_io_service_has_nonaffine(struct dpaa2_io *s);
8567 +
8568 +/*************************/
8569 +/* Notification handling */
8570 +/*************************/
8571 +
8572 +/**
8573 + * struct dpaa2_io_notification_ctx - The DPIO notification context structure.
8574 + * @cb: the callback to be invoked when the notification arrives.
8575 + * @is_cdan: Zero/FALSE for FQDAN, non-zero/TRUE for CDAN.
8576 + * @id: FQID or channel ID, needed for rearm.
8577 + * @desired_cpu: the cpu on which the notifications will show up.
8578 + * @actual_cpu: the cpu the notification actually shows up.
8579 + * @migration_cb: callback function used for migration.
8580 + * @dpio_id: the dpio index.
+ * @qman64: the 64-bit context value that shows up in the FQDAN/CDAN.
8582 + * @node: the list node.
8583 + * @dpio_private: the dpio object internal to dpio_service.
8584 + *
8585 + * When a FQDAN/CDAN registration is made (eg. by DPNI/DPCON/DPAI code), a
8586 + * context of the following type is used. The caller can embed it within a
8587 + * larger structure in order to add state that is tracked along with the
8588 + * notification (this may be useful when callbacks are invoked that pass this
8589 + * notification context as a parameter).
8590 + */
8591 +struct dpaa2_io_notification_ctx {
8592 + void (*cb)(struct dpaa2_io_notification_ctx *);
8593 + int is_cdan;
8594 + uint32_t id;
8595 + /* This specifies which cpu the user wants notifications to show up on
8596 + * (ie. to execute 'cb'). If notification-handling on that cpu is not
8597 + * available at the time of notification registration, the registration
8598 + * will fail. */
8599 + int desired_cpu;
8600 + /* If the target platform supports cpu-hotplug or other features
8601 + * (related to power-management, one would expect) that can migrate IRQ
8602 + * handling of a given DPIO object, then this value will potentially be
8603 + * different to 'desired_cpu' at run-time. */
8604 + int actual_cpu;
8605 + /* And if migration does occur and this callback is non-NULL, it will
+	 * be invoked prior to any further notification callbacks executing on
8607 + * 'newcpu'. Note that 'oldcpu' is what 'actual_cpu' was prior to the
8608 + * migration, and 'newcpu' is what it is now. Both could conceivably be
8609 + * different to 'desired_cpu'. */
8610 + void (*migration_cb)(struct dpaa2_io_notification_ctx *,
8611 + int oldcpu, int newcpu);
8612 + /* These are returned from dpaa2_io_service_register().
8613 + * 'dpio_id' is the dpaa2_io_desc::dpio_id value of the DPIO object that
8614 + * has been selected by the service for receiving the notifications. The
8615 + * caller can use this value in the MC command that attaches the FQ (or
8616 + * channel) of their DPNI (or DPCON, respectively) to this DPIO for
8617 + * notification-generation.
8618 + * 'qman64' is the 64-bit context value that needs to be sent in the
8619 + * same MC command in order to be programmed into the FQ or channel -
8620 + * this is the 64-bit value that shows up in the FQDAN/CDAN messages to
8621 + * the DPIO object, and the DPIO service specifies this value back to
8622 + * the caller so that the notifications that show up will be
+	 * comprehensible/demux-able to the DPIO service. */
8624 + int dpio_id;
8625 + uint64_t qman64;
8626 + /* These fields are internal to the DPIO service once the context is
8627 + * registered. TBD: may require more internal state fields. */
8628 + struct list_head node;
8629 + void *dpio_private;
8630 +};
8631 +
8632 +/**
8633 + * dpaa2_io_service_register() - Prepare for servicing of FQDAN or CDAN
8634 + * notifications on the given DPIO service.
8635 + * @service: the given DPIO service.
8636 + * @ctx: the notification context.
8637 + *
8638 + * The MC command to attach the caller's DPNI/DPCON/DPAI device to a
8639 + * DPIO object is performed after this function is called. In that way, (a) the
8640 + * DPIO service is "ready" to handle a notification arrival (which might happen
8641 + * before the "attach" command to MC has returned control of execution back to
8642 + * the caller), and (b) the DPIO service can provide back to the caller the
8643 + * 'dpio_id' and 'qman64' parameters that it should pass along in the MC command
8644 + * in order for the DPNI/DPCON/DPAI resources to be configured to produce the
8645 + * right notification fields to the DPIO service.
8646 + *
8647 + * Return 0 for success, or -ENODEV for failure.
8648 + */
8649 +int dpaa2_io_service_register(struct dpaa2_io *service,
8650 + struct dpaa2_io_notification_ctx *ctx);
8651 +
8652 +/**
8653 + * dpaa2_io_service_deregister - The opposite of 'register'.
8654 + * @service: the given DPIO service.
8655 + * @ctx: the notification context.
8656 + *
8657 + * Note that 'register' should be called *before*
8658 + * making the MC call to attach the notification-producing device to the
8659 + * notification-handling DPIO service, the 'unregister' function should be
8660 + * called *after* making the MC call to detach the notification-producing
8661 + * device.
8662 + *
8663 + * Return 0 for success.
8664 + */
8665 +int dpaa2_io_service_deregister(struct dpaa2_io *service,
8666 + struct dpaa2_io_notification_ctx *ctx);
8667 +
8668 +/**
8669 + * dpaa2_io_service_rearm() - Rearm the notification for the given DPIO service.
8670 + * @service: the given DPIO service.
8671 + * @ctx: the notification context.
8672 + *
8673 + * Once a FQDAN/CDAN has been produced, the corresponding FQ/channel is
8674 + * considered "disarmed". Ie. the user can issue pull dequeue operations on that
8675 + * traffic source for as long as it likes. Eventually it may wish to "rearm"
8676 + * that source to allow it to produce another FQDAN/CDAN, that's what this
8677 + * function achieves.
8678 + *
+ * Return 0 for success, -ENODEV if no service is available, or -EBUSY/-EIO if
+ * the notification could not be rearmed due to a failure setting the CDAN or
+ * scheduling the fq.
8682 + */
8683 +int dpaa2_io_service_rearm(struct dpaa2_io *service,
8684 + struct dpaa2_io_notification_ctx *ctx);
8685 +
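Putting the notification pieces together: a sketch of registering a FQDAN callback and rearming the FQ from within it. The embedding structure, helper names and cpu choice are hypothetical; only the ctx fields and calls come from this header.

/* Illustrative sketch only: embed the notification context in a private
 * structure so the callback can recover its own state. */
struct example_priv {
	struct dpaa2_io_notification_ctx ctx;
	struct dpaa2_io *service;	/* set by the caller beforehand */
};

static void example_fqdan_cb(struct dpaa2_io_notification_ctx *ctx)
{
	struct example_priv *p = container_of(ctx, struct example_priv, ctx);

	/* ... pull-dequeue from the FQ until it is drained ... */

	/* allow the FQ to generate another FQDAN */
	dpaa2_io_service_rearm(p->service, &p->ctx);
}

static int example_register(struct example_priv *p, uint32_t fqid)
{
	p->ctx.cb = example_fqdan_cb;
	p->ctx.is_cdan = 0;
	p->ctx.id = fqid;
	p->ctx.desired_cpu = 0;	/* assumption: cpu 0 handles notifications */
	/* register *before* the MC call that attaches the FQ */
	return dpaa2_io_service_register(p->service, &p->ctx);
}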
8686 +/**
8687 + * dpaa2_io_from_registration() - Get the DPIO object from the given notification
8688 + * context.
+ * @ctx: the given notification context.
8690 + * @ret: the returned DPIO object.
8691 + *
8692 + * Like 'dpaa2_io_service_get_persistent()' (see below), except that the
8693 + * returned handle is not selected based on a 'cpu' argument, but is the same
8694 + * DPIO object that the given notification context is registered against. The
8695 + * returned handle carries a reference count, so a corresponding dpaa2_io_down()
8696 + * would be required when the reference is no longer needed.
8697 + *
8698 + * Return 0 for success, or -EINVAL for failure.
8699 + */
8700 +int dpaa2_io_from_registration(struct dpaa2_io_notification_ctx *ctx,
8701 + struct dpaa2_io **ret);
8702 +
8703 +/**********************************/
8704 +/* General usage of DPIO services */
8705 +/**********************************/
8706 +
8707 +/**
8708 + * dpaa2_io_service_get_persistent() - Get the DPIO resource from the given
8709 + * notification context and cpu.
8710 + * @service: the DPIO service.
8711 + * @cpu: the cpu that the DPIO resource has stashing affinity to.
8712 + * @ret: the returned DPIO resource.
8713 + *
8714 + * The various DPIO interfaces can accept a "struct dpaa2_io" handle that refers
8715 + * to an individual DPIO object or to a whole service. In the latter case, an
8716 + * internal choice is made for each operation. This function supports the former
8717 + * case, by selecting an individual DPIO object *from* the service in order for
8718 + * it to be used multiple times to provide "persistence". The returned handle
8719 + * also carries a reference count, so a corresponding dpaa2_io_down() would be
8720 + * required when the reference is no longer needed. Note, a parameter of -1 for
8721 + * 'cpu' will select a DPIO resource that has no particular stashing affinity to
8722 + * any cpu (eg. one that stashes to platform cache).
8723 + *
8724 + * Return 0 for success, or -ENODEV for failure.
8725 + */
8726 +int dpaa2_io_service_get_persistent(struct dpaa2_io *service, int cpu,
8727 + struct dpaa2_io **ret);
8728 +
8729 +/*****************/
8730 +/* Pull dequeues */
8731 +/*****************/
8732 +
8733 +/**
8734 + * dpaa2_io_service_pull_fq() - pull dequeue functions from a fq.
8735 + * @d: the given DPIO service.
8736 + * @fqid: the given frame queue id.
8737 + * @s: the dpaa2_io_store object for the result.
8738 + *
8739 + * To support DCA/order-preservation, it will be necessary to support an
8740 + * alternative form, because they must ultimately dequeue to DQRR rather than a
8741 + * user-supplied dpaa2_io_store. Furthermore, those dequeue results will
8742 + * "complete" using a caller-provided callback (from DQRR processing) rather
8743 + * than the caller explicitly looking at their dpaa2_io_store for results. Eg.
8744 + * the alternative form will likely take a callback parameter rather than a
8745 + * store parameter. Ignoring it for now to keep the picture clearer.
8746 + *
8747 + * Return 0 for success, or error code for failure.
8748 + */
8749 +int dpaa2_io_service_pull_fq(struct dpaa2_io *d, uint32_t fqid,
8750 + struct dpaa2_io_store *s);
8751 +
8752 +/**
8753 + * dpaa2_io_service_pull_channel() - pull dequeue functions from a channel.
8754 + * @d: the given DPIO service.
8755 + * @channelid: the given channel id.
8756 + * @s: the dpaa2_io_store object for the result.
8757 + *
8758 + * To support DCA/order-preservation, it will be necessary to support an
8759 + * alternative form, because they must ultimately dequeue to DQRR rather than a
8760 + * user-supplied dpaa2_io_store. Furthermore, those dequeue results will
8761 + * "complete" using a caller-provided callback (from DQRR processing) rather
8762 + * than the caller explicitly looking at their dpaa2_io_store for results. Eg.
8763 + * the alternative form will likely take a callback parameter rather than a
8764 + * store parameter. Ignoring it for now to keep the picture clearer.
8765 + *
8766 + * Return 0 for success, or error code for failure.
8767 + */
8768 +int dpaa2_io_service_pull_channel(struct dpaa2_io *d, uint32_t channelid,
8769 + struct dpaa2_io_store *s);
8770 +
8771 +/************/
8772 +/* Enqueues */
8773 +/************/
8774 +
8775 +/**
8776 + * dpaa2_io_service_enqueue_fq() - Enqueue a frame to a frame queue.
8777 + * @d: the given DPIO service.
8778 + * @fqid: the given frame queue id.
8779 + * @fd: the frame descriptor which is enqueued.
8780 + *
8781 + * This definition bypasses some features that are not expected to be priority-1
+ * features, and may not be needed at all under current assumptions (QBMan's
+ * feature set is wider than the MC object model is intending to support,
+ * initially at least). Plus, keeping them out (for now) keeps the API view
+ * simpler. Missing features are:
8786 + * - enqueue confirmation (results DMA'd back to the user)
8787 + * - ORP
8788 + * - DCA/order-preservation (see note in "pull dequeues")
8789 + * - enqueue consumption interrupts
8790 + *
8791 + * Return 0 for successful enqueue, or -EBUSY if the enqueue ring is not ready,
8792 + * or -ENODEV if there is no dpio service.
8793 + */
8794 +int dpaa2_io_service_enqueue_fq(struct dpaa2_io *d,
8795 + uint32_t fqid,
8796 + const struct dpaa2_fd *fd);
8797 +
8798 +/**
8799 + * dpaa2_io_service_enqueue_qd() - Enqueue a frame to a QD.
8800 + * @d: the given DPIO service.
8801 + * @qdid: the given queuing destination id.
8802 + * @prio: the given queuing priority.
8803 + * @qdbin: the given queuing destination bin.
8804 + * @fd: the frame descriptor which is enqueued.
8805 + *
8806 + * This definition bypasses some features that are not expected to be priority-1
+ * features, and may not be needed at all under current assumptions (QBMan's
+ * feature set is wider than the MC object model is intending to support,
+ * initially at least). Plus, keeping them out (for now) keeps the API view
+ * simpler. Missing features are:
8811 + * - enqueue confirmation (results DMA'd back to the user)
8812 + * - ORP
8813 + * - DCA/order-preservation (see note in "pull dequeues")
8814 + * - enqueue consumption interrupts
8815 + *
8816 + * Return 0 for successful enqueue, or -EBUSY if the enqueue ring is not ready,
8817 + * or -ENODEV if there is no dpio service.
8818 + */
8819 +int dpaa2_io_service_enqueue_qd(struct dpaa2_io *d,
8820 + uint32_t qdid, uint8_t prio, uint16_t qdbin,
8821 + const struct dpaa2_fd *fd);
8822 +
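Since both enqueue calls can return -EBUSY when the enqueue ring is temporarily full, callers typically retry or re-queue the frame themselves. A minimal sketch for the FQ variant (the helper name, retry count and fqid are placeholders):

/* Illustrative sketch only: retry a few times if the enqueue ring is
 * busy, then give up and let the caller handle it. */
static inline int example_enqueue(struct dpaa2_io *d, uint32_t fqid,
				  const struct dpaa2_fd *fd)
{
	int i, err;

	for (i = 0; i < 10; i++) {
		err = dpaa2_io_service_enqueue_fq(d, fqid, fd);
		if (err != -EBUSY)
			return err;	/* 0 on success, or a real error */
		cpu_relax();
	}
	return -EBUSY;
}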
8823 +/*******************/
8824 +/* Buffer handling */
8825 +/*******************/
8826 +
8827 +/**
8828 + * dpaa2_io_service_release() - Release buffers to a buffer pool.
8829 + * @d: the given DPIO object.
8830 + * @bpid: the buffer pool id.
8831 + * @buffers: the buffers to be released.
8832 + * @num_buffers: the number of the buffers to be released.
8833 + *
8834 + * Return 0 for success, and negative error code for failure.
8835 + */
8836 +int dpaa2_io_service_release(struct dpaa2_io *d,
8837 + uint32_t bpid,
8838 + const uint64_t *buffers,
8839 + unsigned int num_buffers);
8840 +
8841 +/**
8842 + * dpaa2_io_service_acquire() - Acquire buffers from a buffer pool.
8843 + * @d: the given DPIO object.
8844 + * @bpid: the buffer pool id.
8845 + * @buffers: the buffer addresses for acquired buffers.
8846 + * @num_buffers: the expected number of the buffers to acquire.
8847 + *
8848 + * Return a negative error code if the command failed, otherwise it returns
8849 + * the number of buffers acquired, which may be less than the number requested.
8850 + * Eg. if the buffer pool is empty, this will return zero.
8851 + */
8852 +int dpaa2_io_service_acquire(struct dpaa2_io *d,
8853 + uint32_t bpid,
8854 + uint64_t *buffers,
8855 + unsigned int num_buffers);
8856 +
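Note the asymmetric return convention: release returns 0 or a negative error, while acquire returns how many buffers were actually obtained. A sketch of seeding a pool and draining it again, tolerating partial acquires (helper name, bpid and count are placeholders):

/* Illustrative sketch only: push a batch of buffers into a pool, then
 * pull them back out. */
static inline int example_cycle_buffers(struct dpaa2_io *d, uint32_t bpid,
					uint64_t *bufs, unsigned int n)
{
	int err, got;

	err = dpaa2_io_service_release(d, bpid, bufs, n);
	if (err)
		return err;

	got = dpaa2_io_service_acquire(d, bpid, bufs, n);
	if (got < 0)
		return got;	/* command failed */

	/* 'got' may be < n, e.g. if another user drained the pool */
	return got;
}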
8857 +/***************/
8858 +/* DPIO stores */
8859 +/***************/
8860 +
8861 +/* These are reusable memory blocks for retrieving dequeue results into, and to
8862 + * assist with parsing those results once they show up. They also hide the
8863 + * details of how to use "tokens" to make detection of DMA results possible (ie.
8864 + * comparing memory before the DMA and after it) while minimising the needless
8865 + * clearing/rewriting of those memory locations between uses.
8866 + */
8867 +
8868 +/**
8869 + * dpaa2_io_store_create() - Create the dma memory storage for dequeue
8870 + * result.
8871 + * @max_frames: the maximum number of dequeued result for frames, must be <= 16.
8872 + * @dev: the device to allow mapping/unmapping the DMAable region.
8873 + *
8874 + * Constructor - max_frames must be <= 16. The user provides the
8875 + * device struct to allow mapping/unmapping of the DMAable region. Area for
8876 + * storage will be allocated during create. The size of this storage is
8877 + * "max_frames*sizeof(struct dpaa2_dq)". The 'dpaa2_io_store' returned is a
8878 + * wrapper structure allocated within the DPIO code, which owns and manages
+ * the allocated store.
8880 + *
+ * Return a dpaa2_io_store struct on success, or NULL if the storage for the
+ * dequeue results could not be allocated.
8883 + */
8884 +struct dpaa2_io_store *dpaa2_io_store_create(unsigned int max_frames,
8885 + struct device *dev);
8886 +
8887 +/**
8888 + * dpaa2_io_store_destroy() - Destroy the dma memory storage for dequeue
8889 + * result.
8890 + * @s: the storage memory to be destroyed.
8891 + *
+ * Frees the specified storage memory.
8893 + */
8894 +void dpaa2_io_store_destroy(struct dpaa2_io_store *s);
8895 +
8896 +/**
8897 + * dpaa2_io_store_next() - Determine when the next dequeue result is available.
8898 + * @s: the dpaa2_io_store object.
8899 + * @is_last: indicate whether this is the last frame in the pull command.
8900 + *
8901 + * Once dpaa2_io_store has been passed to a function that performs dequeues to
8902 + * it, like dpaa2_ni_rx(), this function can be used to determine when the next
8903 + * frame result is available. Once this function returns non-NULL, a subsequent
8904 + * call to it will try to find the *next* dequeue result.
8905 + *
8906 + * Note that if a pull-dequeue has a null result because the target FQ/channel
8907 + * was empty, then this function will return NULL rather than expect the caller
+ * to always check for this on their own. As such, "is_last" can be used to
8909 + * differentiate between "end-of-empty-dequeue" and "still-waiting".
8910 + *
8911 + * Return dequeue result for a valid dequeue result, or NULL for empty dequeue.
8912 + */
8913 +struct dpaa2_dq *dpaa2_io_store_next(struct dpaa2_io_store *s, int *is_last);
8914 +
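The store APIs combine naturally with the pull-dequeue calls above: create a store once, issue a pull into it, then iterate with dpaa2_io_store_next() until is_last is set. A sketch (the helper name and fqid are placeholders, and the busy-wait loop is only for brevity):

/* Illustrative sketch only: drain up to 16 frames from a FQ into a
 * caller-owned store. */
static inline int example_pull(struct dpaa2_io *d, struct device *dev,
			       uint32_t fqid)
{
	struct dpaa2_io_store *s;
	struct dpaa2_dq *dq;
	int is_last = 0, err;

	s = dpaa2_io_store_create(16, dev);
	if (!s)
		return -ENOMEM;

	err = dpaa2_io_service_pull_fq(d, fqid, s);
	if (err)
		goto out;

	while (!is_last) {
		dq = dpaa2_io_store_next(s, &is_last);
		if (!dq)
			continue;	/* result not DMA'd back yet, or empty */
		if (dpaa2_dq_flags(dq) & DPAA2_DQ_STAT_VALIDFRAME)
			/* process dpaa2_dq_fd(dq) here */;
	}
out:
	dpaa2_io_store_destroy(s);
	return err;
}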
8915 +#ifdef CONFIG_FSL_QBMAN_DEBUG
8916 +/**
8917 + * dpaa2_io_query_fq_count() - Get the frame and byte count for a given fq.
8918 + * @d: the given DPIO object.
8919 + * @fqid: the id of frame queue to be queried.
8920 + * @fcnt: the queried frame count.
8921 + * @bcnt: the queried byte count.
8922 + *
8923 + * Knowing the FQ count at run-time can be useful in debugging situations.
8924 + * The instantaneous frame- and byte-count are hereby returned.
8925 + *
8926 + * Return 0 for a successful query, and negative error code if query fails.
8927 + */
8928 +int dpaa2_io_query_fq_count(struct dpaa2_io *d, uint32_t fqid,
8929 + uint32_t *fcnt, uint32_t *bcnt);
8930 +
8931 +/**
+ * dpaa2_io_query_bp_count() - Query the number of buffers currently in a
8933 + * buffer pool.
8934 + * @d: the given DPIO object.
8935 + * @bpid: the index of buffer pool to be queried.
8936 + * @num: the queried number of buffers in the buffer pool.
8937 + *
+ * Return 0 for a successful query, and negative error code if query fails.
8939 + */
8940 +int dpaa2_io_query_bp_count(struct dpaa2_io *d, uint32_t bpid,
8941 + uint32_t *num);
8942 +#endif
8943 +#endif /* __FSL_DPAA2_IO_H */