Why Multithreading?     {#multithreading}
===================

In the past, multithreading has been considered a powerful tool that is
hard to handle (some call it the work of the devil). While there may be
some truth to this, newer tools have made the job of the software
developer much easier when creating parallel implementations of
algorithms. At the same time, the necessity of using multiple threads to
create performant applications has become more and more clear.
Technologies like Hyper-Threading and multi-core processors can only be
exploited if applications provide multiple, concurrently running
processes or threads for the processor to schedule.

Event-driven programs in particular suffer from the issue of latency,
which influences the user's impression of application performance more
than most other factors. The responsiveness of the user interface relies
mainly on the application's ability to process events, an ability that
is severely limited while the application is executing lengthy,
computationally expensive operations. This leads, for example, to
delayed or sluggish processing of paint events. Even if this does not
influence the total time needed to perform the operation the user
requested, it is annoying and not state of the art.

There are different approaches to this issue. The crudest one may be to
process single or multiple events manually while performing a lengthy
operation, which may or may not work well enough, but is certain to ruin
any effort at Separation of Concerns. Concerns simply cannot be
separated if the developer has to intermingle the operation's
instructions with event handling, where the kinds of events that will be
processed are not even known in advance.

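To make this concrete, here is a minimal Qt-style sketch of that crude
approach; `Record` and `processOneRecord()` are hypothetical placeholders
for the lengthy operation. The point is that event handling becomes
interleaved with the algorithm itself:

```cpp
#include <QCoreApplication>
#include <QVector>

struct Record { /* hypothetical payload */ };
void processOneRecord(const Record &record); // hypothetical, CPU-expensive

void processAllRecords(const QVector<Record> &records)
{
    for (int i = 0; i < records.size(); ++i) {
        processOneRecord(records.at(i));
        if (i % 100 == 0) {
            // Keeps the GUI responsive, but pulls event handling into the
            // middle of the algorithm: any event handler may now run here.
            QCoreApplication::processEvents();
        }
    }
}
```
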
Another approach is to use event-controlled asynchronous operations.
This is sufficient in most cases, but still causes a number of issues.
Any operation that might take a long time or block may still stall event
processing for a while. Such risks are hard to assess, and especially
hard to test in laboratory environments where networks are fast and
reliable and the system I/O load is generally low. Network operations
may fail. Hard disks may be suspended. The I/O subsystem may be so busy
that transferring 2 kBytes takes a couple of seconds.

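As an illustration of this event-controlled asynchronous style, here is
a sketch using Qt's QNetworkAccessManager; the URL and the handling code
are placeholders. The request returns immediately, but everything done
in the finished handler still runs on the event loop:

```cpp
#include <QNetworkAccessManager>
#include <QNetworkReply>
#include <QNetworkRequest>
#include <QObject>
#include <QString>
#include <QUrl>

void startDownload(QNetworkAccessManager *manager)
{
    // Returns immediately; the result is delivered later as an event.
    QNetworkReply *reply =
        manager->get(QNetworkRequest(QUrl(QStringLiteral("https://example.org/data"))));
    QObject::connect(reply, &QNetworkReply::finished, reply, [reply]() {
        // Runs in the GUI thread: expensive work done here (parsing a
        // large result, for example) blocks the event loop again.
        const QByteArray data = reply->readAll();
        Q_UNUSED(data)
        reply->deleteLater();
    });
}
```
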
Processing events in objects that live in other threads is another
approach. It brings the usual issues of parallel programming, but it
does ensure that the main event loop returns as quickly as possible.
Usually this approach is combined with a state pattern to keep the GUI
synchronized with the threaded event processing.

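One common way to do this in Qt is the worker-object pattern. The
following sketch is purely illustrative (the class, slot and signal
names are made up) and not ThreadWeaver-specific:

```cpp
#include <QObject>
#include <QThread>

// Hypothetical worker that performs the lengthy operation in its own
// thread and reports back to the GUI thread via queued connections.
class Worker : public QObject
{
    Q_OBJECT
public Q_SLOTS:
    void doWork()
    {
        // ... lengthy operation ...
        Q_EMIT finished();
    }
Q_SIGNALS:
    void finished();
};

// Typical setup, for example in a main window constructor:
//   auto *thread = new QThread(this);
//   auto *worker = new Worker;
//   worker->moveToThread(thread);
//   connect(thread, &QThread::started, worker, &Worker::doWork);
//   connect(worker, &Worker::finished, thread, &QThread::quit);
//   connect(thread, &QThread::finished, worker, &QObject::deleteLater);
//   thread->start();
```
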
Which of these approaches is suitable for a specific case has to be
assessed by the application developers. There is no silver bullet; each
has specific strengths, weaknesses and issues. The ThreadWeaver library
provides the means to implement multithreaded, job-oriented solutions.

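As a first impression, a minimal job-oriented program along the lines of
ThreadWeaver's introductory example might look like this (check the API
documentation for the exact headers and signatures):

```cpp
#include <QCoreApplication>
#include <QDebug>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/ThreadWeaver>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);
    // Wrap a callable into a job and hand it to the global queue;
    // one of ThreadWeaver's worker threads executes it.
    ThreadWeaver::stream() << ThreadWeaver::make_job([]() {
        qDebug() << "Hello from a ThreadWeaver job!";
    });
    // For this tiny example, block until all queued jobs have finished.
    ThreadWeaver::Queue::instance()->finish();
    return 0;
}
```
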
To create performant applications, application designers have to
leverage the functionality provided by the hardware platform as much as
possible. While code optimizations usually yield only slight
improvements, application performance is typically determined by network
and I/O throughput; the CPU time needed is often negligible. At the same
time, the different hardware subsystems are largely independent in
modern architectures: network, I/O and memory interfaces can all
transfer data at the same time, and the CPU is able to process
instructions while all of these subsystems are busy. The modern computer
is not a traditional uniprocessor (think of GPUs, too). To actually
exploit the possibilities modern hardware provides, all of these
parallel subsystems need to be kept busy at the same time, which is very
hard to achieve in a single thread.

Another very important issue is application processing flow. GUI
applications in particular do not follow the traditional imperative
programming pattern. Execution flow is more network-like, with chunks of
code that depend on others having finished before they can touch their
data. Tools to represent those networks and thereby set up an
application's order of execution are rare, and usually leave it to the
developers to hard-code the execution order of the instructions. Such
solutions are usually inflexible and do not adapt to the actual
utilization of the CPU cores and computer subsystems. ThreadWeaver
provides means to represent code execution dependencies and relies on
the operating system's scheduler to actually distribute the workload.
The result is an implementation that stays very close to the original
application semantics, and usually offers improved performance and
scalability in real-life scenarios.

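As a sketch of how such a dependency network can be declared, the
following chains three placeholder jobs with ThreadWeaver's Sequence;
the job bodies and names are hypothetical, and the exact helper
signatures should be checked against the API reference:

```cpp
#include <QCoreApplication>
#include <QSharedPointer>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/Sequence>
#include <ThreadWeaver/ThreadWeaver>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // Hypothetical pipeline stages: "load" must finish before "process",
    // which must finish before "save".
    auto load    = ThreadWeaver::make_job([]() { /* read input */ });
    auto process = ThreadWeaver::make_job([]() { /* transform data */ });
    auto save    = ThreadWeaver::make_job([]() { /* write output */ });

    // The Sequence declares the execution order; when and on which thread
    // each job runs is left to the scheduler.
    auto pipeline = QSharedPointer<ThreadWeaver::Sequence>::create();
    pipeline->addJob(load);
    pipeline->addJob(process);
    pipeline->addJob(save);

    ThreadWeaver::stream() << pipeline;
    ThreadWeaver::Queue::instance()->finish();
    return 0;
}
```

For dependency graphs that are not simple chains, ThreadWeaver's
DependencyPolicy allows declaring individual job-to-job dependencies.
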
The more tasks are handled in parallel, the more memory is needed. There
is a permanent CPU-memory tradeoff: beyond a certain degree of
parallelism, memory that has to be swapped in and out slows the
operations down. Memory usage therefore needs to be kept in balance so
the processors can operate without being slowed down, which means the
number of concurrently executing operations has to be limited in a way
that balances CPU and memory usage. ThreadWeaver provides the means to
do that.

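A simple way to bound the degree of parallelism is to cap the number of
worker threads; the value used below is an arbitrary example, and a real
application would derive it from the workload and the available memory:

```cpp
#include <QCoreApplication>
#include <ThreadWeaver/Queue>
#include <ThreadWeaver/ThreadWeaver>

int main(int argc, char **argv)
{
    QCoreApplication app(argc, argv);

    // At most four jobs will execute at the same time.
    ThreadWeaver::Queue::instance()->setMaximumNumberOfThreads(4);

    for (int i = 0; i < 100; ++i) {
        ThreadWeaver::stream() << ThreadWeaver::make_job([]() {
            // ... memory-intensive operation ...
        });
    }
    ThreadWeaver::Queue::instance()->finish();
    return 0;
}
```

For finer-grained control, queue policies such as
ResourceRestrictionPolicy can cap how many jobs of a particular kind run
concurrently.
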
In general, ThreadWeaver tries to make the task of creating
multithreaded, performant applications as simple as possible.
Programmers should be relieved of synchronization, execution-dependency
and load-balancing issues as much as possible. The API is meant to be
clean, extensible and easy to understand.