
Commit MetaInfo

Revision: 97e5918d9359de140ebd9690fd6d103275121e68 (tree)
Date: 2022-09-14 00:42:28
Author: Albert Mietus < albert AT mietus DOT nl >
Committer: Albert Mietus < albert AT mietus DOT nl >

Log Message

AsIs: a bit of grammerly

Change Summary

Modification

diff -r 4e25c21303e9 -r 97e5918d9359 CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst
--- a/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Sun Sep 11 01:30:40 2022 +0200
+++ b/CCastle/2.Analyse/8.ConcurrentComputingConcepts.rst Tue Sep 13 17:42:28 2022 +0200
@@ -10,14 +10,14 @@
1010 :category: Castle DesignStudy
1111 :tags: Castle, Concurrency, DRAFT§
1212
13- Sooner as we may realize even embedded systems will have many, many cores; as I described in
13+ Sooner than we may realize, even embedded systems will have piles & heaps of cores, as I described in
1414 “:ref:`BusyCores`”. Castle should make it easy to write code for all of them: not to keep them busy, but to maximize
15- speed up [useCase: :need:`U_ManyCore`]. There I also showed that threads_ do not scale well for CPU-bound (embedded)
15+ speedup [useCase: :need:`U_ManyCore`]. I also showed that threads_ do not scale well for CPU-bound (embedded)
1616 systems. Last, I introduced some (more) concurrency abstractions. Some are great, but they often do not fit
1717 nicely in existing languages.
1818
19- Still, as Castle is a new language we have the opportunity to select such a concept and incorporate it into the
20- language ...
19+ Still, as Castle is a new language, we have the opportunity to select such a concept and incorporate it into the
20+ language.
2121 |BR|
2222 In this blog, we explore a bit of theory. I will focus on semantics and the possibilities to implement them
2323 efficiently. The exact syntax will come later.
@@ -25,9 +25,9 @@
2525 Basic terminology
2626 =================
2727
28-There are many theories available and some more practical expertise but they hardly share a common vocabulary.
29-For that reason, let’s describe some basic terms, that will be used in these blogs. As always, we use Wikipedia as common
30-ground and add links for a deep dive.
28+Many theories are available, as is some more practical expertise, but hardly any of them share a common vocabulary. For
29+that reason, I first describe some basic terms and how they are used in these blogs. As always, we use Wikipedia
30+as common ground and add links for a deep dive.
3131 |BR|
3232 Again, we use ‘task’ as the most generic term for work-to-be-executed; that can be (in) a process, (on) a thread, (by) a
3333 computer, etc.
@@ -88,10 +88,10 @@
8888 |BR|
8989 There are two main approaches: shared-data or message-passing; we will introduce them below.
9090
91-Communication takes time, especially *wall time* [#wall-time]_ (or clock time) and may slow down computing. Therefore
91+Communication takes time, especially *wall time* [#wall-time]_ (or clock time), and may slow down computing. Therefore
9292 communication has to be efficient. This is an arduous problem and becomes harder when we have more communication, more
93-concurrency, more parallelism, and/or those tasks are short(er)living. Or better: it depends on the ratio between the
94-communication-time and the time-between-two-communications.
93+concurrency, more parallelism, and/or those tasks are short(er)-lived. Or better: it depends on the ratio of the
94+communication time to the time-between-two-communications.
9595
9696
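
To make that ratio concrete, here is a minimal sketch in Go (Go channels merely stand in for whatever Castle will offer; ``busyWork`` and the loop counts are invented for this illustration). It times an empty channel round-trip against one unit of computation; the smaller the resulting ratio, the less the communication overhead matters.

.. code-block:: go

   package main

   import (
       "fmt"
       "time"
   )

   // busyWork stands in for the computation done between two communications.
   func busyWork(n int) int {
       s := 0
       for i := 0; i < n; i++ {
           s += i
       }
       return s
   }

   func main() {
       req, rep := make(chan int), make(chan int)
       go func() { // a trivial echo-server: it answers every request
           for v := range req {
               rep <- v
           }
       }()

       const rounds = 10_000

       start := time.Now()
       for i := 0; i < rounds; i++ { // communication: one round-trip per iteration
           req <- i
           <-rep
       }
       commPer := time.Since(start) / rounds

       start = time.Now()
       for i := 0; i < rounds; i++ { // computation: the time between two communications
           busyWork(1_000)
       }
       workPer := time.Since(start) / rounds

       fmt.Printf("communication %v, computation %v, ratio %.2f\n",
           commPer, workPer, float64(commPer)/float64(workPer))
   }
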
9797 Shared Memory
@@ -141,58 +141,60 @@
141141 Messaging Aspects
142142 =================
143143
144-There are many variant on messaging, mostly combinations some fundamental aspects. Let mentions some basic ones.
145-|BR| In :ref:`MPA-examples` some existing messaging passing systems are classified in those therms, for those that do
146-prefer a more practical characterisation.
144+There are many variants of messaging, mostly combinations of some fundamental aspects. Let me mention some basic ones.
145+|BR|
146+In :ref:`MPA-examples` some existing message-passing systems are classified in those terms, for those who prefer
147+a more practical characterisation.
147148
148149
149150
150151 (A)Synchronous
151152 --------------
152153
153-**Synchronous** messages resembles normal function-calls. Typically a “question” is send, the call awaits the
154-answer-messages, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive
155-calls. An famous example is RPC_: the Remote Procedure Call.
154+**Synchronous** messages resemble normal function calls. Typically a “question” is sent, the call awaits the
155+answer message, and that answer is returned. This can be seen as a layer on top of the more fundamental send/receive
156+calls. A famous example is RPC_: the Remote Procedure Call.
156157
157-**Asynchronous** messages are more basic: a task send a messages (to somebody else) and continues. That message can be
158-“data”, an “event:, a “commands” or a “query”. Only in the latter case some responds is essental. With async messages,
159-there is no desire that to get the answer immediately.
158+**Asynchronous** messages are more basic: a task sends a message and continues. That message can be “data”, an “event”,
159+a “command”, or a “query”. Only in the latter case is a response essential. With async messages, there is no need
160+to get the answer immediately.
160161
161162 As an example: A task can send many queries (and/or other messages) to multiple destinations at once, then go into
162-*listen-mode*, and handle the replies in the order the are received (which can be different then send-order). Typically,
163-this speeds-up (wall) time, and is only possible with async messages. Notice: the return messages need to carry an “ID”
163+*listen-mode*, and handle the replies in the order they are received (which can differ from the send order). Typically,
164+this saves (wall) time, and is only possible with async messages. Notice: the return messages need to carry an “ID”
164165 of the initial message to keep track -- often that is the query itself.
165166
166167
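
To make the distinction concrete, below is a small Go sketch (Go stands in for Castle, whose syntax is still open; the ``reply`` type and the numbers are invented). The first part blocks on the answer, RPC-style; the second part fires several queries and then goes into listen-mode, matching every answer by the ID it carries.

.. code-block:: go

   package main

   import "fmt"

   // reply carries the ID of the originating query, so answers can be matched
   // even when they arrive in a different order than the queries were sent.
   type reply struct {
       id     int
       answer int
   }

   // server answers every query; what the "work" is does not matter here.
   func server(queries <-chan int, replies chan<- reply) {
       for q := range queries {
           replies <- reply{id: q, answer: q * q}
       }
   }

   func main() {
       queries := make(chan int)
       replies := make(chan reply)
       go server(queries, replies)

       // Synchronous style: send one question and await its answer (RPC-like).
       queries <- 3
       fmt.Println("sync answer:", (<-replies).answer)

       // Asynchronous style: send several queries first, then go into listen-mode
       // and handle the replies in the order they arrive.
       go func() {
           for _, q := range []int{4, 5, 6} {
               queries <- q
           }
       }()
       for i := 0; i < 3; i++ {
           r := <-replies
           fmt.Printf("async: query %d -> answer %d\n", r.id, r.answer)
       }
   }
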
167168 (Un)Buffered
168169 ------------
169170
170-Despide it’s is not truly a characteristic of the messages itself, messages can be *buffered*, or not. It is about
171-piping, transporting the message: can this “connection” (see below) *contain/save/store* messages? When there is no
172-storage at all the writer and reader needs to rendezvous: send and receive at the same (wall) time.
171+Although it is not truly a characteristic of the message itself, messages can be *buffered*, or not. It is about
172+the plumbing to transport the message: can this “connection” (see below) *contain/save/store* messages? When there is no
173+storage at all, the writer and reader need to rendezvous: send and receive at the same (wall) time.
173174 |BR|
174-With a buffer (often depicted as a queue) multiple messages may be sent, before they need to be picked-up by the
175+With a buffer (often depicted as a queue) multiple messages may be sent before they need to be picked up by the
175176 receiver; the number depends on the size of the buffer.
176177
177-Note: this is always asymmetric; messages need to be send before the can be read.
178+Note: this is always asymmetric; messages need to be sent before they can be read.
178179
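
In Go this aspect is literally one parameter of ``make``; the sketch below (only an illustration, not Castle syntax) shows the rendezvous of an unbuffered channel next to a buffered one that lets the sender run ahead by the size of the buffer.

.. code-block:: go

   package main

   import "fmt"

   func main() {
       // Unbuffered: the send blocks until a reader is ready -- a rendezvous.
       rendezvous := make(chan string)
       go func() { rendezvous <- "hello" }() // needs a concurrent reader, or it blocks forever
       fmt.Println(<-rendezvous)

       // Buffered: up to two messages can be sent before anyone picks them up.
       buffered := make(chan string, 2)
       buffered <- "first"
       buffered <- "second" // a third send would block until a receive frees a slot
       fmt.Println(<-buffered, <-buffered)
   }
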
179180 Connected Channels (or not)
180181 ---------------------------
181182
182-Messages can be send over (pre-) *connected channels* or to freely addressable end-points. Some people use the term “connection
183-oriented” for those connected-channels, others use the term “channel” more generic and for any medium that is
184-transporting messages. I try to use “*connected-channel”* when is a *pre connected* channel.
183+Messages can be sent over (pre-) *connected channels* or to freely addressable end-points. Some people use the term
184+“connection-oriented” for those connected channels; others use the term “channel” more generically, for any medium
185+that transports messages.
186+I try to use “*connected-channel*” when it is a *pre-connected* channel.
185187
186-When using connected-channels, one writes the message to the channel; there is no need to add the receiver to the
188+When using connected channels, one writes the message to the channel; there is no need to add the receiver to the
187189 message. Also when reading, the sender is clear.
188190 |BR|
189-Clearly, the channel has to be set-up before it can be used.
191+Clearly, the channel has to be set up before it can be used.
190192
191-Without connected-channels, each message needs a recipient; often that receiver is added (“carried”) to the message
193+Without connected channels, each message needs a recipient; often that receiver is added (“carried”) to the message
192194 itself.
193195 |BR|
194-A big advantage is, that one does not need to create channels and end-points first -- which especially count when a low
195-number (possible one) of messages are send to the same receiver, and/or many receivers exist (which would lead to a huge
196+A big advantage is that one does not need to create channels and end-points first -- which especially counts when a low
197+number (possibly one) of messages is sent to the same receiver, and/or many receivers exist (which would lead to a huge
196198 number of channels).
197199
198200
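
The contrast can be sketched in Go as well (illustrative only; the ``addressed`` message type and the mailbox registry are invented for this example): with a pre-connected channel neither side names the other, whereas with free addressing every message carries its recipient and some router has to deliver it.

.. code-block:: go

   package main

   import "fmt"

   // addressed is a message that carries its recipient -- needed when there is
   // no pre-connected channel between sender and receiver.
   type addressed struct {
       to   string
       body string
   }

   func main() {
       // Connected channel: set up once; afterwards the receiver is implicit.
       aToB := make(chan string, 1)
       aToB <- "ping" // the writer just writes to the channel
       fmt.Println("B got:", <-aToB)

       // Free addressing: one shared "network" plus a registry of end-points.
       network := make(chan addressed, 8)
       mailboxes := map[string]chan string{
           "B1": make(chan string, 8),
           "B2": make(chan string, 8),
       }
       go func() { // a tiny router that delivers each message to the named recipient
           for m := range network {
               mailboxes[m.to] <- m.body
           }
       }()

       network <- addressed{to: "B2", body: "hello B2"}
       network <- addressed{to: "B1", body: "hello B1"}
       fmt.Println("B1 got:", <-mailboxes["B1"])
       fmt.Println("B2 got:", <-mailboxes["B2"])
   }
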
@@ -200,12 +202,12 @@
200202 ---------------
201203
202204 Both the writer and the reader can be *blocking* (or not), which is a facet of the function call. A blocking reader
203-will always return when a messages is available -- and will pauze until then.
205+will always return when a message is available -- and will pause until then.
204206 |BR|
205-Also the write-call can be blocking: it will pauze until the message can be send -- e.g. the reader is available
206-(rendezvous) or a message-buffer is free.
207+Also, the write-call can be blocking: it will pause until the message can be sent -- e.g. the reader is available
208+(rendezvous) or a message buffer is free.
207209
208-When the call is non-blocking, the call will return without waiting and yield a flag whether is was successful or not.
210+When the call is non-blocking, the call will return without waiting and yield a flag indicating whether it was successful.
209211 Then, the developer will commonly “cycle” to poll for a successful call, and let the task do some other/background work
210212 as well.
211213
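
In Go the blocking behaviour sits in the call itself, which makes it easy to sketch (again only an illustration; the timings are arbitrary): a plain receive is the blocking variant, while a ``select`` with a ``default`` branch turns the same receive into a non-blocking poll, so the task can do other/background work in between.

.. code-block:: go

   package main

   import (
       "fmt"
       "time"
   )

   func main() {
       messages := make(chan string)

       go func() {
           time.Sleep(50 * time.Millisecond) // the sender is not ready yet
           messages <- "done"
       }()

       // A plain `m := <-messages` would be the blocking variant: it pauses
       // until a message is available. Below, `select` with `default` makes
       // the same receive non-blocking and yields "success or not" instead.
       for {
           select {
           case m := <-messages: // successful: a message was available
               fmt.Println("received:", m)
               return
           default: // would have blocked: do some background work and poll again
               fmt.Println("no message yet, doing background work")
               time.Sleep(10 * time.Millisecond)
           }
       }
   }
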
@@ -213,8 +215,8 @@
213215 Uni/Bi-Directional, Broadcast
214216 -----------------------------
215217
216-Messages --or actually the channel [#channelDir]_ that transport them-- can be *unidirectional*: from sender to receiver only;
217-*bidirectional*: both sides can send and receive; or *broadcasted*: one message is send to many receivers [#anycast]_.
218+Messages --or the channel [#channelDir]_ that transports them-- can be *unidirectional*: from sender to receiver only;
219+*bidirectional*: both sides can send and receive; or *broadcasted*: one message is sent to many receivers [#anycast]_.
218220
219221
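
A Go sketch of the directionality aspect (illustrative only): the channel-type arrows make an end unidirectional, and the ``broadcast`` helper -- invented for this example -- copies one message to many receivers; a bidirectional setup would simply use a pair of such channels, one per direction.

.. code-block:: go

   package main

   import "fmt"

   // producer only sends: the `chan<-` type makes this end unidirectional.
   func producer(out chan<- int) {
       for i := 1; i <= 3; i++ {
           out <- i
       }
       close(out)
   }

   // broadcast copies every incoming message to all receivers: one message, many recipients.
   func broadcast(in <-chan int, outs []chan int) {
       for m := range in {
           for _, o := range outs {
               o <- m
           }
       }
       for _, o := range outs {
           close(o)
       }
   }

   func main() {
       src := make(chan int)
       subs := []chan int{make(chan int, 3), make(chan int, 3)}

       go producer(src)
       go broadcast(src, subs)

       for i, s := range subs {
           for m := range s {
               fmt.Printf("receiver %d got %d\n", i, m)
           }
       }
   }
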
220222 Reliability & Order
@@ -245,7 +247,7 @@
245247 |BR|
246248 It’s clear that ``A`` will get `m5` and `m6` -- given that all messages (aka channels) are reliable. But there are many
247249 ways those messages may be received in the opposite order. Presumably, even in more ways than you can imagine. For example,
248-``B1`` might processes `m4` before it process `m1`! This can happen when channel ``A->B1`` is *slow*, or when ``B2``
250+``B1`` might process `m4` before it processes `m1`! This can happen when channel ``A->B1`` is *slow*, or when ``B2``
249251 gets CPU-time before ``B1``, or...
250252
251253 When we add buffering, more connected components, etc., this *“network”* acts less reliably than we might expect (even
@@ -259,20 +261,20 @@
259261
260262 .. hint::
261263
262- As a simple example to demonstrate the advantage of a “unreliable connection”, lets consider an audio (bidirectional)
264+ As a simple example to demonstrate the advantage of an “unreliable connection”, let us consider an audio (bidirectional)
263265 connection that is not 100% reliable.
264266 |BR|
265- When we use it “as is”, there will be a bit of noise, and even some hick-ups. For most people this is acceptable,
266- when needed they will use phrases as *“Can you repeat that?”*.
267+ When we use it “as is”, there will be a bit of noise, and even some hiccups. For most people, this is acceptable;
268+ when needed, they will use phrases such as *“Can you repeat that?”*.
267269
268- To make that connection reliable, we need checksums, low-level conformation message, and once in a while have to sent
269- a message again. This implies some buffering (at both sides), and so the audio-stream will have a bit of delay.
270- This is a common solution for unidirectional POD-casts, and such.
270+ To make that connection reliable, we need checksums, low-level confirmation messages, and once in a while we have to send
271+ a message again. This implies some buffering (at both sides), and so the audio stream will have a bit of delay.
272+ This is a common solution for unidirectional podcasts, and such.
271273
272- For a bidirectional conversation however this buffering is not satisfactory. It makes the *slow*, people have to wait
273- on each-other and will interrupted one-other.
274+ For a bidirectional conversation, however, this buffering is not satisfactory. It makes the conversation *slow*; people
275+ have to wait on each other and will interrupt one another.
274276 |BR|
275- Then, a *faster* conversation with a bit of noise is commonly preferred.a
277+ Then, a *faster* conversation with a bit of noise is commonly preferred.
276278
277279
278280 ------------------------