apm821xx: dw_dmac: backport fixes and cleanups from 4.7
[openwrt/openwrt.git] target/linux/apm821xx/patches-4.4/012-dmaengine-Add-transfer-termination-synchronization-s.patch
From b36f09c3c441a6e59eab9315032e7d546571de3f Mon Sep 17 00:00:00 2001
From: Lars-Peter Clausen <lars@metafoo.de>
Date: Tue, 20 Oct 2015 11:46:28 +0200
Subject: [PATCH] dmaengine: Add transfer termination synchronization support

The DMAengine API has a long-standing race condition that is inherent to
the API itself. Calling dmaengine_terminate_all() is supposed to stop and
abort any pending or active transfers that have previously been submitted.
Unfortunately it is possible that this operation races against a currently
running (or, with some drivers, a scheduled) completion callback.

Since the API allows dmaengine_terminate_all() to be called from atomic
context as well as from within a completion callback it is not possible to
synchronize to the execution of the completion callback from within
dmaengine_terminate_all() itself.

This means that a user of the DMAengine API does not know when it is safe
to free resources used in the completion callback, which can result in a
use-after-free race condition.

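To make the race concrete, consider a client sketch along the following
lines (my_dev, MY_BUF_SIZE and the helper name are placeholders invented
for this illustration, not names taken from an existing driver):

  struct my_dev {
          struct device *dev;
          struct dma_chan *chan;
          void *buf;              /* coherent buffer used by the transfer */
          dma_addr_t buf_dma;
  };

  static void my_stop_unsafe(struct my_dev *md)
  {
          dmaengine_terminate_all(md->chan);
          /* A complete callback for an earlier descriptor may still be
           * running at this point and may still touch md->buf, so freeing
           * the buffer here is a potential use-after-free.
           */
          dma_free_coherent(md->dev, MY_BUF_SIZE, md->buf, md->buf_dma);
  }
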
This patch addresses the issue by introducing an explicit synchronization
primitive to the DMAengine API called dmaengine_synchronize().

The existing dmaengine_terminate_all() is deprecated in favor of
dmaengine_terminate_sync() and dmaengine_terminate_async(). The former
aborts all pending and active transfers and synchronizes to the current
context, meaning it will wait until all running completion callbacks have
finished. This means it is only possible to call this function from
non-atomic context. The latter function does not synchronize, but can still
be used in atomic context or from within a complete callback. It has to be
followed up by dmaengine_synchronize() before a client can free the
resources used in a completion callback.

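With the new API the same teardown splits into a non-blocking stop and a
later synchronization point, roughly as follows (again only a sketch,
reusing the placeholder my_dev structure from above):

  /* May be called from atomic context, e.g. from a complete callback:
   * this only schedules the termination of the active and pending
   * descriptors and returns without waiting.
   */
  static void my_stop(struct my_dev *md)
  {
          dmaengine_terminate_async(md->chan);
  }

  /* Process context only: wait until the transfer has really stopped and
   * all complete callbacks have finished, after which it is safe to free
   * the buffer they used.
   */
  static void my_teardown(struct my_dev *md)
  {
          dmaengine_synchronize(md->chan);
          dma_free_coherent(md->dev, MY_BUF_SIZE, md->buf, md->buf_dma);
  }

If the caller is already in non-atomic context, the two steps can be
combined into a single dmaengine_terminate_sync() call.
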
In addition to this the semantics of the device_terminate_all() callback
are slightly relaxed by this patch. It is now OK for a driver to only
schedule the termination of the active transfer; it does not necessarily
have to wait until the DMA controller has completely stopped. The driver
must ensure though that the controller has stopped and no longer accesses
any memory when the device_synchronize() callback returns.

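For example, a provider whose complete callbacks run from a tasklet could
implement the new callback roughly as below (a sketch only; foo_chan and
foo_synchronize are placeholder names and the terminate path itself remains
driver specific):

  struct foo_chan {
          struct dma_chan chan;
          struct tasklet_struct task;     /* runs the complete callbacks */
  };

  static void foo_synchronize(struct dma_chan *c)
  {
          struct foo_chan *fc = container_of(c, struct foo_chan, chan);

          /* device_terminate_all only disabled the hardware and marked the
           * descriptors for cleanup; waiting for the completion tasklet
           * here guarantees that no descriptor memory is touched and no
           * complete callback runs after this function returns. May sleep.
           */
          tasklet_kill(&fc->task);
  }

The function is then hooked up in the driver's probe path through the new
struct dma_device::device_synchronize member, next to the existing
device_terminate_all callback.
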
This relaxation was done in part because most drivers do not pay attention
to this anyway at the moment, and in part to emphasize that this needs to
be done when the device_synchronize() callback is implemented. But it also
helps with implementing support for devices where stopping the controller
can require operations that may sleep.

Signed-off-by: Lars-Peter Clausen <lars@metafoo.de>
Signed-off-by: Vinod Koul <vinod.koul@intel.com>
---
 Documentation/dmaengine/client.txt | 38 ++++++++++++++-
 Documentation/dmaengine/provider.txt | 20 +++++++-
 drivers/dma/dmaengine.c | 5 +-
 include/linux/dmaengine.h | 90 ++++++++++++++++++++++++++++++++++++
 4 files changed, 148 insertions(+), 5 deletions(-)

diff --git a/Documentation/dmaengine/client.txt b/Documentation/dmaengine/client.txt
index 11fb87f..d9f9f46 100644
--- a/Documentation/dmaengine/client.txt
+++ b/Documentation/dmaengine/client.txt
@@ -128,7 +128,7 @@ The slave DMA usage consists of following steps:
transaction.

For cyclic DMA, a callback function may wish to terminate the
- DMA via dmaengine_terminate_all().
+ DMA via dmaengine_terminate_async().

Therefore, it is important that DMA engine drivers drop any
locks before calling the callback function which may cause a
@@ -166,12 +166,29 @@ The slave DMA usage consists of following steps:

Further APIs:

-1. int dmaengine_terminate_all(struct dma_chan *chan)
+1. int dmaengine_terminate_sync(struct dma_chan *chan)
+ int dmaengine_terminate_async(struct dma_chan *chan)
+ int dmaengine_terminate_all(struct dma_chan *chan) /* DEPRECATED */

This causes all activity for the DMA channel to be stopped, and may
discard data in the DMA FIFO which hasn't been fully transferred.
No callback functions will be called for any incomplete transfers.

+ Two variants of this function are available.
+
+ dmaengine_terminate_async() might not wait until the DMA has been fully
+ stopped or until any running complete callbacks have finished. But it is
+ possible to call dmaengine_terminate_async() from atomic context or from
+ within a complete callback. dmaengine_synchronize() must be called before it
+ is safe to free the memory accessed by the DMA transfer or free resources
+ accessed from within the complete callback.
+
+ dmaengine_terminate_sync() will wait for the transfer and any running
+ complete callbacks to finish before it returns. But the function must not be
+ called from atomic context or from within a complete callback.
+
+ dmaengine_terminate_all() is deprecated and should not be used in new code.
+
2. int dmaengine_pause(struct dma_chan *chan)

This pauses activity on the DMA channel without data loss.
@@ -197,3 +214,20 @@ Further APIs:
a running DMA channel. It is recommended that DMA engine users
pause or stop (via dmaengine_terminate_all()) the channel before
using this API.
+
+5. void dmaengine_synchronize(struct dma_chan *chan)
+
+ Synchronize the termination of the DMA channel to the current context.
+
+ This function should be used after dmaengine_terminate_async() to synchronize
+ the termination of the DMA channel to the current context. The function will
+ wait for the transfer and any running complete callbacks to finish before it
+ returns.
+
+ If dmaengine_terminate_async() is used to stop the DMA channel this function
+ must be called before it is safe to free memory accessed by previously
+ submitted descriptors or to free any resources accessed within the complete
+ callback of previously submitted descriptors.
+
+ The behavior of this function is undefined if dma_async_issue_pending() has
+ been called between dmaengine_terminate_async() and this function.
diff --git a/Documentation/dmaengine/provider.txt b/Documentation/dmaengine/provider.txt
index 67d4ce4..122b7f4 100644
--- a/Documentation/dmaengine/provider.txt
+++ b/Documentation/dmaengine/provider.txt
@@ -327,8 +327,24 @@ supported.

* device_terminate_all
- Aborts all the pending and ongoing transfers on the channel
- - This command should operate synchronously on the channel,
- terminating right away all the channels
+ - For aborted transfers the complete callback should not be called
+ - Can be called from atomic context or from within a complete
+ callback of a descriptor. Must not sleep. Drivers must be able
+ to handle this correctly.
+ - Termination may be asynchronous. The driver does not have to
+ wait until the currently active transfer has completely stopped.
+ See device_synchronize.
+
+ * device_synchronize
+ - Must synchronize the termination of a channel to the current
+ context.
+ - Must make sure that memory for previously submitted
+ descriptors is no longer accessed by the DMA controller.
+ - Must make sure that all complete callbacks for previously
+ submitted descriptors have finished running and none are
+ scheduled to run.
+ - May sleep.
+

Misc notes (stuff that should be documented, but don't really know
where to put them)
diff --git a/drivers/dma/dmaengine.c b/drivers/dma/dmaengine.c
index 3ecec14..d6fc82e 100644
--- a/drivers/dma/dmaengine.c
+++ b/drivers/dma/dmaengine.c
@@ -265,8 +265,11 @@ static void dma_chan_put(struct dma_chan *chan)
module_put(dma_chan_to_owner(chan));

/* This channel is not in use anymore, free it */
- if (!chan->client_count && chan->device->device_free_chan_resources)
+ if (!chan->client_count && chan->device->device_free_chan_resources) {
+ /* Make sure all operations have completed */
+ dmaengine_synchronize(chan);
chan->device->device_free_chan_resources(chan);
+ }

/* If the channel is used via a DMA request router, free the mapping */
if (chan->router && chan->router->route_free) {
diff --git a/include/linux/dmaengine.h b/include/linux/dmaengine.h
index c47c68e..4662d9a 100644
--- a/include/linux/dmaengine.h
+++ b/include/linux/dmaengine.h
@@ -654,6 +654,8 @@ enum dmaengine_alignment {
* paused. Returns 0 or an error code
* @device_terminate_all: Aborts all transfers on a channel. Returns 0
* or an error code
+ * @device_synchronize: Synchronizes the termination of a transfer to the
+ * current context.
* @device_tx_status: poll for transaction completion, the optional
* txstate parameter can be supplied with a pointer to get a
* struct with auxiliary transfer status information, otherwise the call
@@ -737,6 +739,7 @@ struct dma_device {
int (*device_pause)(struct dma_chan *chan);
int (*device_resume)(struct dma_chan *chan);
int (*device_terminate_all)(struct dma_chan *chan);
+ void (*device_synchronize)(struct dma_chan *chan);

enum dma_status (*device_tx_status)(struct dma_chan *chan,
dma_cookie_t cookie,
@@ -828,6 +831,13 @@ static inline struct dma_async_tx_descriptor *dmaengine_prep_dma_sg(
src_sg, src_nents, flags);
}

+/**
+ * dmaengine_terminate_all() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * This function is DEPRECATED; use either dmaengine_terminate_sync() or
+ * dmaengine_terminate_async() instead.
+ */
static inline int dmaengine_terminate_all(struct dma_chan *chan)
{
if (chan->device->device_terminate_all)
@@ -836,6 +846,86 @@ static inline int dmaengine_terminate_all(struct dma_chan *chan)
return -ENOSYS;
}

+/**
+ * dmaengine_terminate_async() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending descriptors
+ * that have previously been submitted to the channel. It is not guaranteed
+ * though that the transfer for the active descriptor has stopped when the
+ * function returns. Furthermore it is possible the complete callback of a
+ * submitted transfer is still running when this function returns.
+ *
+ * dmaengine_synchronize() needs to be called before it is safe to free
+ * any memory that is accessed by previously submitted descriptors or before
+ * freeing any resources accessed from within the completion callback of any
+ * previously submitted descriptors.
+ *
+ * This function can be called from atomic context as well as from within a
+ * complete callback of a descriptor submitted on the same channel.
+ *
+ * If neither of the two conditions above applies, consider using
+ * dmaengine_terminate_sync() instead.
+ */
+static inline int dmaengine_terminate_async(struct dma_chan *chan)
+{
+ if (chan->device->device_terminate_all)
+ return chan->device->device_terminate_all(chan);
+
+ return -EINVAL;
+}
+
+/**
+ * dmaengine_synchronize() - Synchronize DMA channel termination
+ * @chan: The channel to synchronize
+ *
+ * Synchronizes the DMA channel termination to the current context. When this
+ * function returns it is guaranteed that all transfers for previously issued
+ * descriptors have stopped and it is safe to free the memory associated
+ * with them. Furthermore it is guaranteed that all complete callback functions
+ * for a previously submitted descriptor have finished running and it is safe to
+ * free resources accessed from within the complete callbacks.
+ *
+ * The behavior of this function is undefined if dma_async_issue_pending() has
+ * been called between dmaengine_terminate_async() and this function.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline void dmaengine_synchronize(struct dma_chan *chan)
+{
+ if (chan->device->device_synchronize)
+ chan->device->device_synchronize(chan);
+}
+
+/**
+ * dmaengine_terminate_sync() - Terminate all active DMA transfers
+ * @chan: The channel for which to terminate the transfers
+ *
+ * Calling this function will terminate all active and pending transfers
+ * that have previously been submitted to the channel. It is similar to
+ * dmaengine_terminate_async() but guarantees that the DMA transfer has actually
+ * stopped and that all complete callbacks have finished running when the
+ * function returns.
+ *
+ * This function must only be called from non-atomic context and must not be
+ * called from within a complete callback of a descriptor submitted on the same
+ * channel.
+ */
+static inline int dmaengine_terminate_sync(struct dma_chan *chan)
+{
+ int ret;
+
+ ret = dmaengine_terminate_async(chan);
+ if (ret)
+ return ret;
+
+ dmaengine_synchronize(chan);
+
+ return 0;
+}
+
static inline int dmaengine_pause(struct dma_chan *chan)
{
if (chan->device->device_pause)
--
2.8.1
