Smalltalks related to Computer Architecture

2024-10-16: First encounter with dataflow architecture, and the Manchester Prototype Dataflow Machine

On June 5th this year, it occurred to me that if we could analyze all the data dependences of an entire program, we could execute the independent dataflows in parallel and commit the results at the confluence nodes where they rejoin. I named this design, which I have not implemented yet, Paracell.
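The idea above can be sketched roughly like this (my own toy illustration, not an actual Paracell implementation; the graph layout and names are hypothetical): build a dependency graph of operations, fire every node whose inputs are ready in parallel, and let a confluence node combine the independent branches.

```python
# Toy sketch of the "Paracell" idea: a dependency graph where independent
# dataflows run in parallel and meet at a confluence node.
from concurrent.futures import ThreadPoolExecutor

# Each node: (function, list of input node names).
graph = {
    "a": (lambda: 2, []),
    "b": (lambda: 3, []),
    "sq_a": (lambda a: a * a, ["a"]),          # dataflow 1
    "cube_b": (lambda b: b ** 3, ["b"]),       # dataflow 2, independent of 1
    "sum": (lambda x, y: x + y, ["sq_a", "cube_b"]),  # confluence node
}

def run(graph):
    done = {}
    pending = dict(graph)
    with ThreadPoolExecutor() as pool:
        while pending:
            # Dataflow firing rule: a node runs as soon as all inputs exist.
            ready = [n for n, (_, deps) in pending.items()
                     if all(d in done for d in deps)]
            futures = {n: pool.submit(pending[n][0],
                                      *(done[d] for d in pending[n][1]))
                       for n in ready}
            for n, f in futures.items():
                done[n] = f.result()   # "commit" the result of this branch
                del pending[n]
    return done

print(run(graph)["sum"])  # 2*2 + 3**3 = 31
```

Here `sq_a` and `cube_b` have no shared ancestors, so they are fired in the same round; the `sum` node is the confluence point that commits the combined result.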

Recently, after exploring Interaction Nets and HVM2, I started learning about dataflow architecture. It was a research hotspot in the 1970s and early 1980s, and the Wikipedia article showed that ideas like mine were already studied at that time.

In my view, the main obstacles to applying these theories are the costs of synchronization, memory access, and communication latency.

2024-10-14: Interaction Nets are hard, and HVM2 is tough

Out of curiosity about Victor Taelin’s promotion of “the great parallelism” powered by Interaction Nets and his implementation HVM2, I tried to understand what an Interaction Net is and how it works to extract a program’s potential parallelism.

My friend @imlyzh advised me in a voice chat tonight: do not expect Interaction Nets, or their implementation HVM2, to be practical.

He mentioned problems caused by the partitioning and utilization of the GPU’s shared memory. However, my understanding of GPUs is still shallow, so I couldn’t deeply understand or describe the specific reasons.

Latency is a difficult problem in parallel computing. Most computing models that take memory access and communication into account find that the cost of communication is much higher than that of computation, so destroying program locality is a very bad choice: it invalidates the cache and causes unacceptable memory-access latency.

The latency introduced by damaging program locality is often far greater than the benefit gained from the parallelism.
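A minimal sketch of the locality point (my own illustration, not code from any of the systems above): sum the same array once in sequential order and once in a shuffled order. On most machines the scattered pass is slower, largely because it defeats the cache and the hardware prefetcher, though in CPython interpreter overhead blurs the effect, so treat this only as a rough demonstration.

```python
# Same total work, different access order: sequential vs. scattered reads.
import random
import time
from array import array

n = 1_000_000
data = array("d", range(n))
order = list(range(n))
random.shuffle(order)

t0 = time.perf_counter()
seq_sum = sum(data[i] for i in range(n))   # sequential, cache-friendly
t1 = time.perf_counter()
rand_sum = sum(data[i] for i in order)     # scattered, cache-hostile
t2 = time.perf_counter()

assert seq_sum == rand_sum                 # identical work, different order
print(f"sequential: {t1 - t0:.3f}s  scattered: {t2 - t1:.3f}s")
```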