
Kahn process networks


A Kahn process network (KPN, or process network) is a distributed model of computation in which a group of deterministic sequential processes communicate through unbounded first in, first out channels. The model requires that reading from a channel is blocking while writing is non-blocking. Due to these key restrictions, the resulting process network exhibits deterministic behavior that does not depend on the timing of computation nor on communication delays.

Kahn process networks were originally developed for modeling parallel programs, but have proven convenient for modeling embedded systems, high-performance computing systems, signal processing systems, stream processing systems, dataflow programming languages, and other computational tasks. KPNs were introduced by Gilles Kahn in 1974.[1]

A Kahn process network with three processes (vertices) and three communication channels (edges). During its execution, the process P reads from channels A and B and writes to channel C.

Execution model


KPN is a common model for describing signal processing systems in which infinite streams of data are incrementally transformed by processes executing in sequence or in parallel. Although the processes are conceptually parallel, neither multitasking nor actual parallelism is required to execute the model.

In a KPN, processes communicate via unbounded FIFO channels. Processes read and write atomic data elements, alternatively called tokens, from and to channels. Writing to a channel is non-blocking, i.e., it always succeeds and does not stall the process, while reading from a channel is blocking, i.e., a process that reads from an empty channel stalls and can only continue when the channel contains sufficient data items (tokens). Processes are not allowed to test an input channel for the existence of tokens without consuming them, because the timing and execution order of processes must not affect the result. A FIFO cannot be consumed by multiple processes, nor can multiple processes write to a single FIFO. Given a specific input (token) history, a process must be deterministic: it always produces the same output tokens.
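
The blocking-read, non-blocking-write discipline can be illustrated with ordinary threads and FIFO queues. Below is a minimal sketch in Python, assuming that unbounded queue.Queue objects model the channels (get() blocks on an empty queue, while put() on an unbounded queue never blocks); the channel and process names mirror the figure above, and all function names and values are illustrative.

```python
import threading
import queue

def source(out_ch, values):
    # A process with no input channels: a pure data source.
    for v in values:
        out_ch.put(v)            # non-blocking write

def process_p(in_a, in_b, out_c, n):
    # Process P: read a token from A, then from B, compute, write to C.
    for _ in range(n):
        a = in_a.get()           # blocking read from channel A
        b = in_b.get()           # blocking read from channel B
        out_c.put(a + b)         # deterministic computation, then write to C

def sink(in_ch, n, result):
    # A process with no output channels: a data sink.
    for _ in range(n):
        result.append(in_ch.get())

if __name__ == "__main__":
    chan_a, chan_b, chan_c = queue.Queue(), queue.Queue(), queue.Queue()
    result = []
    threads = [
        threading.Thread(target=source, args=(chan_a, [1, 2, 3])),
        threading.Thread(target=source, args=(chan_b, [10, 20, 30])),
        threading.Thread(target=process_p, args=(chan_a, chan_b, chan_c, 3)),
        threading.Thread(target=sink, args=(chan_c, 3, result)),
    ]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    print(result)                # always [11, 22, 33], regardless of scheduling
```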

Notes on processes

  • A process need not read any input or have any input channels, as it may act as a pure data source
  • A process need not write any output or have any output channels, in which case it acts as a pure data sink
  • Testing input channels for emptiness (or using non-blocking reads) could be allowed for optimization purposes, but it must not affect the outputs. It can be beneficial and/or possible to do something in advance rather than wait for a channel. For example, assume a process must read from two different channels. If the first read would stall (wait for a token) but the second read could succeed immediately, it can be beneficial to perform the second read first to save time, because the reading itself often consumes some time (e.g., time for memory allocation or copying); a sketch of this follows the list.
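
As a hypothetical illustration of the last point (all names are illustrative, and the peek relies on Python's queue.Queue.empty()), the sketch below reads one token from each of two channels; it peeks only to choose the order of the two blocking reads, so both tokens are still consumed and the result is independent of timing.

```python
import queue

def read_pair(in_a, in_b):
    # The pure KPN form would simply be: a = in_a.get(); b = in_b.get().
    # Peeking with empty() only reorders the two blocking reads; it never
    # skips a read, so the returned pair is the same for any arrival timing.
    if in_a.empty() and not in_b.empty():
        b = in_b.get()           # B already has data: take it first
        a = in_a.get()           # then block until A delivers a token
    else:
        a = in_a.get()           # block on A first ...
        b = in_b.get()           # ... then on B
    return a, b
```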

Process firing semantics as Petri nets

Firing semantics of process P (from the KPN shown above), modeled with a Petri net

Assuming process P in the KPN above is constructed so that it first reads data from channel A, then from channel B, computes something and then writes data to channel C, the execution model of the process can be modeled with the Petri net shown on the right.[2] The single token in the PE resource place forbids the process from executing simultaneously for different input data. When data arrives at channel A or B, tokens are placed into the places FIFO A and FIFO B respectively. The transitions of the Petri net are associated with the respective I/O operations and the computation. When the data has been written to channel C, the PE resource place receives its initial marking again, allowing new data to be read.
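
The firing rule can be made concrete in a few lines of code. The following sketch is not taken from the cited paper; the place and transition names follow the description above but are otherwise illustrative. A transition fires by consuming one token from each of its input places and producing one token in each of its output places.

```python
from collections import Counter

# Current marking: the tokens held by each place.
marking = Counter({"FIFO A": 1, "FIFO B": 1, "PE resource": 1})

# Transitions: (input places, output places).
transitions = {
    "read A":  (["FIFO A", "PE resource"], ["A read"]),
    "read B":  (["FIFO B", "A read"],      ["B read"]),
    "compute": (["B read"],                ["computed"]),
    "write C": (["computed"],              ["FIFO C", "PE resource"]),
}

def enabled(name):
    # A transition is enabled when every input place holds at least one token.
    inputs, _ = transitions[name]
    return all(marking[p] >= 1 for p in inputs)

def fire(name):
    # Consume one token per input place, produce one token per output place.
    inputs, outputs = transitions[name]
    for p in inputs:
        marking[p] -= 1
    for p in outputs:
        marking[p] += 1

# Fire the transitions in the only order the single "PE resource" token allows.
for t in ["read A", "read B", "compute", "write C"]:
    assert enabled(t)
    fire(t)

print(dict(marking))   # "PE resource" is back to 1: ready for the next data
```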

Process as a finite state machine

A finite state machine of a process

A process can be modeled as a finite state machine that is in one of two states:

  • Active: the process computes or writes data
  • Wait: the process is blocked (waiting) for data

Assuming the finite state machine reads program elements associated with the process, it may read three kinds of tokens: "Compute", "Read" and "Write". Additionally, in the Wait state it can only return to the Active state by reading a special "Get" token, which indicates that the communication channel associated with the wait contains readable data.
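
A minimal sketch of this two-state machine follows, under one possible reading of the description above; the string encodings of the states and tokens are illustrative.

```python
def step(state, token):
    # state is "Active" or "Wait"; token is the next program element.
    if state == "Active":
        if token in ("Compute", "Write"):
            return "Active"      # computing or writing keeps the process active
        if token == "Read":
            return "Wait"        # a read blocks until data becomes available
    elif state == "Wait":
        if token == "Get":
            return "Active"      # the channel now holds data: resume
        return "Wait"            # otherwise keep waiting
    raise ValueError(f"unexpected token {token!r} in state {state}")

state = "Active"
for tok in ["Compute", "Read", "Get", "Write"]:
    state = step(state, tok)
print(state)                     # "Active"
```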

Properties


Boundedness of channels


A channel is strictly bounded by b if it has at most b unconsumed tokens for any possible execution. A KPN is strictly bounded by b if all of its channels are strictly bounded by b.

The number of unconsumed tokens depends on the execution order (scheduling) of the processes. A spontaneous data source could produce arbitrarily many tokens into a channel if the scheduler never executed the processes that consume those tokens.

A real application cannot have unbounded FIFOs, and therefore the scheduling and the maximum capacity of the FIFOs must be designed into a practical implementation. The maximum capacity of the FIFOs can be handled in several ways:

  • FIFO bounds can be mathematically derived at design time to avoid FIFO overflows. This is, however, not possible for all KPNs: it is an undecidable problem to test whether a KPN is strictly bounded by b.[citation needed] Moreover, in practical situations, the bound may be data dependent.
  • FIFO bounds can be grown on demand.[3]
  • Blocking writes can be used, so that a process blocks when a FIFO is full (see the sketch below). This approach may unfortunately lead to an artificial deadlock unless the designer properly derives safe bounds for the FIFOs (Parks, 1995). Local detection of artificial deadlocks at run-time may be necessary to guarantee the production of the correct output.[4]
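
A sketch of the third option, assuming Python's queue.Queue with a maxsize bound so that put() blocks on a full FIFO; the bound of 2 and the token values are arbitrary, and choosing a bound that is too small for a given network is exactly what can introduce the artificial deadlock discussed above.

```python
import threading
import queue

channel = queue.Queue(maxsize=2)     # strictly bounded FIFO

def producer():
    for v in range(5):
        channel.put(v)               # blocks while the FIFO already holds 2 tokens

def consumer():
    for _ in range(5):
        print(channel.get())         # blocking read, as in the unbounded model

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer)
t1.start()
t2.start()
t1.join()
t2.join()
```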

Closed and open systems


A closed KPN has no external input or output channels. Processes that have no input channels act as data sources and processes that have no output channels act as data sinks. In an open KPN, each process has at least one input and at least one output channel.

Determinism


The processes of a KPN are deterministic: for the same input history they must always produce exactly the same output. Processes can be modeled as sequential programs that read from and write to ports in any order or quantity as long as the determinism property is preserved. As a consequence, the KPN model as a whole is deterministic, and the following factors entirely determine the outputs of the system:

  • processes
  • the network
  • initial tokens

Hence, the timing of the processes does not affect the outputs of the system.
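
This can be illustrated with a small self-contained experiment (all names and values are illustrative): the same two-process network is run repeatedly with random scheduling delays, and the collected output is identical in every run.

```python
import queue
import random
import threading
import time

def run_once():
    chan = queue.Queue()
    out = []

    def producer():
        for v in [3, 1, 4, 1, 5]:
            time.sleep(random.random() * 0.01)   # perturb the schedule
            chan.put(v)                          # non-blocking write

    def doubler():
        for _ in range(5):
            time.sleep(random.random() * 0.01)   # perturb the schedule
            out.append(2 * chan.get())           # blocking read, deterministic step

    threads = [threading.Thread(target=producer), threading.Thread(target=doubler)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return out

runs = [run_once() for _ in range(10)]
assert all(r == runs[0] for r in runs)           # identical output for every schedule
print(runs[0])                                   # [6, 2, 8, 2, 10]
```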

Monotonicity


KPN processes are monotonic: reading more tokens can only lead to writing more tokens, and tokens read in the future can only affect tokens written in the future. Within a single channel (signal) there is a total order of events, but there is no order relation between events in different signals. Thus, KPNs are only partially ordered, which classifies them as an untimed model.

Applications


Due to their high expressiveness and succinctness, KPNs are used as the underlying model of computation in several academic modeling tools to represent streaming applications with certain properties (e.g., dataflow-oriented, stream-based).

The open-source Daedalus framework,[5] maintained by the Leiden Embedded Research Center at Leiden University, accepts sequential programs written in C and generates a corresponding KPN. This KPN can then, for example, be used to systematically map the application onto an FPGA-based platform.

The Ambric Am2045 massively parallel processor array is a KPN implemented in actual silicon.[6] Its 336 32-bit processors are connected by a programmable interconnect of dedicated FIFOs. Thus its channels are strictly bounded with blocking writes.

The AI Engines in some AMD Xilinx Versal devices are building blocks of a Kahn process network.[7]

References

  1. ^ Kahn, G. (1974). Rosenfeld, Jack L. (ed.). The semantics of a simple language for parallel programming (PDF). Proc. IFIP Congress on Information Processing. North-Holland. ISBN 0-7204-2803-3.
  2. ^ Bernardeschi, C.; De Francesco, N.; Vaglini, G. (1995). "A Petri nets semantics for data flow networks". Acta Informatica. 32 (4): 347–374. doi:10.1007/BF01178383.
  3. ^ Parks, Thomas M. (1995). Bounded Scheduling of Process Networks (Ph. D.). University of California, Berkeley.
  4. ^ Geilen, Marc; Basten, Twan (2003). Degano, P. (ed.). Requirements on the Execution of Kahn Process Networks. Proc. 12th European Symposium on Programming Languages and Systems (ESOP). Springer. pp. 319–334. CiteSeerX 10.1.1.12.7148.
  5. ^ LIACS Daedalus framework, http://daedalus.liacs.nl
  6. ^ Mike Butts, Anthony Mark Jones, Paul Wasson, "A Structural Object Programming Model, Architecture, Chip and Tools for Reconfigurable Computing", Proceedings of FCCM, April 2007, IEEE Computer Society
  7. ^ AMD Xilinx UG1076 (v2022.2) October 19, 2022 AI Engine Tools and Flows, p.11

Further reading

  • Lee, E.A.; Parks, T.M. (1995). "Dataflow process networks" (PDF). Proceedings of the IEEE. 83 (5): 773–801. doi:10.1109/5.381846. ISSN 0018-9219. Retrieved 2019-02-13.
  • Josephs, Mark B. (2005). "Models for Data-Flow Sequential Processes". In Abdallah, Ali E.; Jones, Cliff B.; Sanders, Jeff W. (eds.). Communicating Sequential Processes. The First 25 Years: Symposium on the Occasion of 25 Years of CSP, London, UK, July 7-8, 2004. Revised Invited Papers. Lecture Notes in Computer Science. Vol. 3525. Berlin, Heidelberg: Springer Berlin Heidelberg. pp. 85–97. CiteSeerX 10.1.1.60.5694. doi:10.1007/11423348_6. ISBN 978-3-540-32265-8.