Revision | 97e5918d9359de140ebd9690fd6d103275121e68 (tree) |
---|---|
Time | 2022-09-14 00:42:28 |
Author | Albert Mietus < albert AT mietus DOT nl > |
Committer | Albert Mietus < albert AT mietus DOT nl > |
AsIs: a bit of grammerly
@@ -10,14 +10,14 @@ | ||
10 | 10 | :category: Castle DesignStudy |
11 | 11 | :tags: Castle, Concurrency, DRAFT§ |
12 | 12 | |
13 | - Sooner as we may realize even embedded systems will have many, many cores; as I described in | |
13 | + Sooner than we may realize, even embedded systems will have piles & heaps of cores, as I described in | |
14 | 14 | “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize |
15 | - speed up [useCase: :need:`U_ManyCore`]. There I also showed that threads_ do not scale well for CPU-bound (embedded) | |
15 | + speed up [useCase: :need:`U_ManyCore`]. I also showed that threads_ do not scale well for CPU-bound (embedded) | |
16 | 16 | systems. Last, I introduced some (more) concurrency abstractions. Some are great, but they often do not fit |
17 | 17 | nicely in existing languages. |
18 | 18 | |
19 | - Still, as Castle is a new language we have the opportunity to select such a concept and incorporate it into the | |
20 | - language ... | |
19 | + Still, as Castle is a new language, we have the opportunity to select such a concept and incorporate it into the | |
20 | + language. | |
21 | 21 | |BR| |
22 | 22 | In this blog, we explore a bit of theory. I will focus on semantics and the possibilities to implement them |
23 | 23 | efficiently. The exact syntax will come later. |
@@ -25,9 +25,9 @@ | ||
25 | 25 | Basic terminology |
26 | 26 | ================= |
27 | 27 | |
28 | -There are many theories available and some more practical expertise but they hardly share a common vocabulary. | |
29 | -For that reason, let’s describe some basic terms, that will be used in these blogs. As always, we use Wikipedia as common | |
30 | -ground and add links for a deep dive. | |
28 | +Many theories are available, as is some more practical expertise, and hardly any of them share a common vocabulary. For | |
29 | +that reason, I first describe some basic terms, and how they are used in these blogs. As always, we use Wikipedia | |
30 | +as common ground and add links for a deep dive. | |
31 | 31 | |BR| |
32 | 32 | Again, we use ‘task’ as the most generic term for work-to-be-executed; that can be (in) a process, (on) a thread, (by) a |
33 | 33 | computer, etc. |
@@ -88,10 +88,10 @@ | ||
88 | 88 | |BR| |
89 | 89 | There are two main approaches: shared-data or message-passing; we will introduce them below. |
90 | 90 | |
91 | -Communication takes time, especially *wall time* [#wall-time]_ (or clock time) and may slow down computing. Therefore | |
91 | +Communication takes time, especially *wall time* [#wall-time]_ (or clock time), and may slow down computing. Therefore | |
92 | 92 | communication has to be efficient. This is an arduous problem and becomes harder when we have more communication, more |
93 | -concurrency, more parallelism, and/or those tasks are short(er)living. Or better: it depends on the ratio between the | |
94 | -communication-time and the time-between-two-communications. | |
93 | +concurrency, more parallelism, and/or those tasks are short(er) living. Or better: it depends on the ratio of | |
94 | +communication-time to the time-between-two-communications. | |
95 | 95 | |
96 | 96 | |
97 | 97 | Shared Memory |
@@ -141,58 +141,60 @@ | ||
141 | 141 | Messaging Aspects |
142 | 142 | ================= |
143 | 143 | |
144 | -There are many variant on messaging, mostly combinations some fundamental aspects. Let mentions some basic ones. | |
145 | -|BR| In :ref:`MPA-examples` some existing messaging passing systems are classified in those therms, for those that do | |
146 | -prefer a more practical characterisation. | |
144 | +There are many variants of messaging, mostly combinations of some fundamental aspects. Let me mention some basic ones. | |
145 | +|BR| | |
146 | +In :ref:`MPA-examples` some existing message-passing systems are classified in those terms, for those who prefer | |
147 | +a more practical characterisation. | |
147 | 148 | |
148 | 149 | |
149 | 150 | |
150 | 151 | (A)Synchronous |
151 | 152 | -------------- |
152 | 153 | |
153 | -**Synchronous** messages resembles normal function-calls. Typically a “question” is send, the call awaits the | |
154 | -answer-messages, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive | |
155 | -calls. An famous example is RPC_: the Remote Procedure Call. | |
154 | +**Synchronous** messages resemble normal function calls. Typically a “question” is sent, the call awaits the | |
155 | +answer message, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive | |
156 | +calls. A famous example is RPC_: the Remote Procedure Call. | |
156 | 157 | |
157 | -**Asynchronous** messages are more basic: a task send a messages (to somebody else) and continues. That message can be | |
158 | -“data”, an “event:, a “commands” or a “query”. Only in the latter case some responds is essental. With async messages, | |
159 | -there is no desire that to get the answer immediately. | |
158 | +**Asynchronous** messages are more basic: a task sends a message and continues. That message can be “data”, an “event”, | |
159 | +a “command”, or a “query”. Only in the latter case is some response essential. With async messages, there is no need | |
160 | +to get the answer immediately. | |
160 | 161 | |
161 | 162 | As an example: A task can send many queries (and/or other messages) to multiple destinations at once, then go into |
162 | -*listen-mode*, and handle the replies in the order the are received (which can be different then send-order). Typically, | |
163 | -this speeds-up (wall) time, and is only possible with async messages. Notice: the return messages need to carry an “ID” | |
163 | +*listen-mode*, and handle the replies in the order they are received (which can differ from the send order). Typically, | |
164 | +this speeds up (wall) time, and is only possible with async messages. Notice: the return messages need to carry an “ID” | |
164 | 165 | of the initial message, to keep track -- often that is the query itself. |
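The fan-out pattern above can be sketched in plain Python with `queue` and `threading` (a hedged illustration with made-up worker names; Castle's own syntax is still undefined), tagging each reply with the ID of its query:

```python
import queue
import threading

def worker(name, inbox, replies):
    # Each worker answers one query; the reply carries the query ID
    # so the sender can match replies to queries later.
    query_id, payload = inbox.get()
    replies.put((query_id, f"{name} handled {payload}"))

replies = queue.Queue()                      # shared reply channel
inboxes = [queue.Queue() for _ in range(3)]  # one inbox per worker

threads = [threading.Thread(target=worker, args=(f"W{i}", inbox, replies))
           for i, inbox in enumerate(inboxes)]
for t in threads:
    t.start()

# Send all queries at once (asynchronously), then go into "listen mode".
for i, inbox in enumerate(inboxes):
    inbox.put((i, f"query-{i}"))

# Handle replies in arrival order -- which need not be the send order.
results = []
for _ in range(3):
    results.append(replies.get())
for t in threads:
    t.join()
```

A synchronous (RPC-style) caller would instead wait for each answer before sending the next query, losing the overlap in wall time.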
165 | 166 | |
166 | 167 | |
167 | 168 | (Un)Buffered |
168 | 169 | ------------ |
169 | 170 | |
170 | -Despide it’s is not truly a characteristic of the messages itself, messages can be *buffered*, or not. It is about | |
171 | -piping, transporting the message: can this “connection” (see below) *contain/save/store* messages? When there is no | |
172 | -storage at all the writer and reader needs to rendezvous: send and receive at the same (wall) time. | |
171 | +Although it is not truly a characteristic of the message itself, messages can be *buffered*, or not. It is about | |
172 | +the plumbing to transport the message: can this “connection” (see below) *contain/save/store* messages? When there is no | |
173 | +storage at all, the writer and reader need to rendezvous: send and receive at the same (wall) time. | |
173 | 174 | |BR| |
174 | -With a buffer (often depicted as a queue) multiple messages may be sent, before they need to be picked-up by the | |
175 | +With a buffer (often depicted as a queue) multiple messages may be sent before they need to be picked up by the | |
175 | 176 | receiver; the number depends on the size of the buffer. |
176 | 177 | |
177 | -Note: this is always asymmetric; messages need to be send before the can be read. | |
178 | +Note: this is always asymmetric; messages need to be sent before they can be read. | |
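A minimal Python sketch of buffering, using a bounded `queue.Queue` as a stand-in for the “connection”:

```python
import queue

buf = queue.Queue(maxsize=2)  # a "connection" that can store two messages

buf.put("m1")                 # returns immediately: the buffer has room
buf.put("m2")                 # still room

full = False
try:
    buf.put("m3", block=False)  # buffer full: without an active reader
except queue.Full:              # this send cannot proceed
    full = True                 # an unbuffered channel would behave like
                                # this on *every* send: the writer has to
                                # wait for the reader (rendezvous)

received = [buf.get(), buf.get()]  # the reader picks them up later, in order
```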
178 | 179 | |
179 | 180 | Connected Channels (or not) |
180 | 181 | --------------------------- |
181 | 182 | |
182 | -Messages can be send over (pre-) *connected channels* or to freely addressable end-points. Some people use the term “connection | |
183 | -oriented” for those connected-channels, others use the term “channel” more generic and for any medium that is | |
184 | -transporting messages. I try to use “*connected-channel”* when is a *pre connected* channel. | |
183 | +Messages can be sent over (pre-) *connected channels* or to freely addressable end-points. Some people use the term | |
184 | +“connection-oriented” for those connected channels; others use the term “channel” more generically, for any medium | |
185 | +that transports messages. | |
186 | +I try to use “*connected-channel*” when it is a *pre-connected* channel. | |
185 | 187 | |
186 | -When using connected-channels, one writes the message to the channel; there is no need to add the receiver to the | |
188 | +When using connected channels, one writes the message to the channel; there is no need to add the receiver to the | |
187 | 189 | message. Also when reading, the sender is clear. |
188 | 190 | |BR| |
189 | -Clearly, the channel has to be set-up before it can be used. | |
191 | +Clearly, the channel has to be set up before it can be used. | |
190 | 192 | |
191 | -Without connected-channels, each message needs a recipient; often that receiver is added (“carried”) to the message | |
193 | +Without connected channels, each message needs a recipient; often that receiver is added (“carried”) to the message | |
192 | 194 | itself. |
193 | 195 | |BR| |
194 | -A big advantage is, that one does not need to create channels and end-points first -- which especially count when a low | |
195 | -number (possible one) of messages are send to the same receiver, and/or many receivers exist (which would lead to a huge | |
196 | +A big advantage is that one does not need to create channels and end-points first -- which especially counts when a low | |
197 | +number (possibly one) of messages is sent to the same receiver, and/or many receivers exist (which would lead to a huge | |
196 | 198 | number of channels). |
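The difference can be sketched in Python (hypothetical names; a `queue.Queue` stands in for the transport):

```python
import queue

# Connected channel: one queue set up between A and B beforehand.
# The message itself carries no addressing information.
a_to_b = queue.Queue()
a_to_b.put("hello")            # the writer just writes to the channel
msg = a_to_b.get()             # the reader knows it came from A

# Addressed messages: one shared transport; every message carries its
# recipient, and a dispatcher routes it to the right mailbox.
transport = queue.Queue()
mailboxes = {"B1": [], "B2": []}

transport.put(("B2", "hi B2"))           # (recipient, payload)
recipient, payload = transport.get()
mailboxes[recipient].append(payload)     # dispatch by the carried address
```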
197 | 199 | |
198 | 200 |
@@ -200,12 +202,12 @@ | ||
200 | 202 | --------------- |
201 | 203 | |
202 | 204 | Both the writer and the reader can be *blocking* (or not), which is a facet of the function-call. A blocking reader |
203 | -will always return when a messages is available -- and will pauze until then. | |
205 | +will always return when a message is available -- and will pause until then. | |
204 | 206 | |BR| |
205 | -Also the write-call can be blocking: it will pauze until the message can be send -- e.g. the reader is available | |
206 | -(rendezvous) or a message-buffer is free. | |
207 | +Also, the write-call can be blocking: it will pause until the message can be sent -- e.g. the reader is available | |
208 | +(rendezvous) or a message buffer is free. | |
207 | 209 | |
208 | -When the call is non-blocking, the call will return without waiting and yield a flag whether is was successful or not. | |
210 | +When the call is non-blocking, the call will return without waiting and yield a flag indicating whether it was successful. | |
209 | 211 | Then, the developer will commonly “cycle” to poll for a successful call, and let the task do some other/background work |
210 | 212 | as well. |
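In Python's `queue` module this distinction maps onto the `block` flag of `get`/`put` -- a rough sketch of the two call styles, not Castle semantics:

```python
import queue

ch = queue.Queue()

# Non-blocking read: returns at once, with a success flag instead of waiting.
try:
    ch.get(block=False)
    ok = True
except queue.Empty:
    ok = False       # nothing there yet; do background work and poll again

ch.put("ready")
msg = ch.get()       # blocking read: pauses until a message is available
```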
211 | 213 |
@@ -213,8 +215,8 @@ | ||
213 | 215 | Uni/Bi-Directional, Broadcast |
214 | 216 | ----------------------------- |
215 | 217 | |
216 | -Messages --or actually the channel [#channelDir]_ that transport them-- can be *unidirectional*: from sender to receiver only; | |
217 | -*bidirectional*: both sides can send and receive; or *broadcasted*: one message is send to many receivers [#anycast]_. | |
218 | +Messages --or the channel [#channelDir]_ that transports them-- can be *unidirectional*: from sender to receiver only; | |
219 | +*bidirectional*: both sides can send and receive; or *broadcasted*: one message is sent to many receivers [#anycast]_. | |
218 | 220 | |
219 | 221 | |
220 | 222 | Reliability & Order |
@@ -245,7 +247,7 @@ | ||
245 | 247 | |BR| |
246 | 248 | It’s clear that ``A`` will get `m5` and `m6` -- given that all messages (aka channels) are reliable. But there are many |
247 | 249 | ways those messages may arrive in the opposite order. Presumably, even in more ways than you can imagine. For example, |
248 | -``B1`` might processes `m4` before it process `m1`! This can happen when channel ``A->B1`` is *slow*, or when ``B2`` | |
250 | +``B1`` might process `m4` before it processes `m1`! This can happen when channel ``A->B1`` is *slow*, or when ``B2`` | |
249 | 251 | gets CPU-time before ``B1``, or... |
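One way such a reordering can be reproduced is by giving one channel an artificial transport delay (a Python simulation with made-up timings, not part of the original example):

```python
import queue
import threading
import time

log = queue.Queue()  # records messages in the order they are delivered

def deliver(msg, delay):
    # Model a channel whose transport takes `delay` seconds.
    time.sleep(delay)
    log.put(msg)

# `m1` is sent first, but over a slow channel; `m4` is sent a bit later,
# over a fast one -- and overtakes it.
threading.Thread(target=deliver, args=("m1", 0.2)).start()
time.sleep(0.05)
threading.Thread(target=deliver, args=("m4", 0.0)).start()

order = [log.get(), log.get()]  # with these timings: m4 before m1
```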
250 | 252 | |
251 | 253 | When we add buffering, more connected components, etc., this *“network”* acts less reliably than we might expect (even |
@@ -259,20 +261,20 @@ | ||
259 | 261 | |
260 | 262 | .. hint:: |
261 | 263 | |
262 | - As a simple example to demonstrate the advantage of a “unreliable connection”, lets consider an audio (bidirectional) | |
264 | + As a simple example to demonstrate the advantage of an “unreliable connection”, let us consider an audio (bidirectional) | |
263 | 265 | connection, that is not 100% reliable. |
264 | 266 | |BR| |
265 | - When we use it “as is”, there will be a bit of noise, and even some hick-ups. For most people this is acceptable, | |
266 | - when needed they will use phrases as *“Can you repeat that?”*. | |
267 | + When we use it “as is”, there will be a bit of noise, and even some hiccups. For most people, this is acceptable; | |
268 | + when needed, they will use phrases such as *“Can you repeat that?”*. | |
267 | 269 | |
268 | - To make that connection reliable, we need checksums, low-level conformation message, and once in a while have to sent | |
269 | - a message again. This implies some buffering (at both sides), and so the audio-stream will have a bit of delay. | |
270 | - This is a common solution for unidirectional POD-casts, and such. | |
270 | + To make that connection reliable, we need checksums, low-level confirmation messages, and once in a while have to send | |
271 | + a message again. This implies some buffering (at both sides), and so the audio stream will have a bit of delay. | |
272 | + This is a common solution for unidirectional podcasts, and such. | |
271 | 273 | |
272 | - For a bidirectional conversation however this buffering is not satisfactory. It makes the *slow*, people have to wait | |
273 | - on each-other and will interrupted one-other. | |
274 | + For a bidirectional conversation, however, this buffering is not satisfactory. It makes the conversation *slow*; people | |
275 | + have to wait on each other and will interrupt one another. | |
274 | 276 | |BR| |
275 | - Then, a *faster* conversation with a bit of noise is commonly preferred.a | |
277 | + Then, a *faster* conversation with a bit of noise is commonly preferred. | |
276 | 278 | |
277 | 279 | |
278 | 280 | ------------------------ |