
  • jerf 2 hours ago
    While that speed increase is real, of course, you're really just looking at the general speed delta between Python and C there. To be honest I'm a bit surprised you didn't get another factor of 2 or 3.

    "Cimba even processed more simulated events per second on a single CPU core than SimPy could do on all 64 cores"

    This is one of the reasons I don't care in the slightest about Python "fixing" the GIL. When your language is already running at a speed where a compiled language can quite reasonably be expected to beat your 32- or 64-core performance on a single core, who really cares if removing the GIL lets me get twice the speed of an unthreaded Python program by running on 8 cores? If speed was important, you shouldn't have been using pure Python.

    (And let me underline that *pure* in "pure Python". There are many ways to be in the Python ecosystem but not be running Python. Those all have their own complicated cost/benefit tradeoffs on speed, ranging all over the board. I'm talking about pure Python here.)
    • ambonvik 1 hour ago
      Good point. The profiler tells me that the context switch between coroutines is the most time-consuming part, even though I tried to keep it as light as possible, so I guess the explanation for "only" getting a 45x speedup rather than 100x is that it spends a significant part of its time moving register contents to and from memory.

      Any ideas for how to speed up the context switches would be welcome, of course.
  • anematode 1 hour ago
    Looks really cool, and I'm going to take a closer look tonight!

    How do you do the context switching between coroutines? getcontext/setcontext, or something more architecture-specific? I'm currently working on some stackful coroutine stuff, and the swapcontext calls actually take a fair amount of time, so I'm planning on writing a custom one that doesn't preserve unused bits (signal mask and FPU state). So I'm curious about your findings there.
    • ambonvik 1 hour ago
      Hi, it is hand-coded assembly. Pushing all necessary registers to the stack (including GS on Windows), swapping the stack pointer to/from memory, popping the registers, and off we go on the other stack. I save the FPU flags, but no more FPU state than necessary (which again is a whole lot more on Windows than on Linux).

      Others have done this elsewhere, of course. There are links/references to several other examples in the code. I mention two in particular in the NOTICE file, not because I copied their code, but because I read it very closely and followed the outline of their examples. It would probably have taken me forever to figure out the Windows TIB on my own.

      What I think is pretty cool (biased as I am) in my implementation is the "trampoline" that launches the coroutine function and waits silently in case it returns. If it does, the return is intercepted and the proper coroutine exit() function gets called.
      • anematode 1 hour ago
        Interesting. How does the trampoline work?

        I'm wondering whether we could further decrease the overhead of the switch on GCC/clang by marking the push function with `__attribute__((preserve_none))`. Then among the GPRs we would only need to save the base and stack pointers, and the callers would only save what they need.
        • ambonvik 55 minutes ago
          It is an assembly function that does not get called from anywhere. I pre-load the stack image with its intended register contents from C, including the trampoline function's address as the "return address". On the first transfer to the newly created coroutine, that address gets loaded, and the trampoline in turn calls the coroutine function that suddenly sits in one of its registers, along with its arguments. If the coroutine function ever returns, execution simply continues in the trampoline function, which proceeds to call the coroutine_exit() function, whose address also just happens to be stored in another handy register.

          https://github.com/ambonvik/cimba/blob/main/src/port/x86-64/linux/cmi_coroutine_context.asm

          https://github.com/ambonvik/cimba/blob/main/src/port/x86-64/linux/cmi_coroutine_context.c
          • anematode 47 minutes ago
            Ahhh, OK. Cool!

            Do sanitizers (ASan/UBSan/Valgrind) still work in this setting? Also, I'm wondering if you'll need some special handling if Intel CET is enabled.
            • ambonvik 37 minutes ago
              They probably do, but I have not used them. My approach has been "offensive programming": put in asserts for preconditions, invariants, and postconditions wherever possible. If anything starts to smell, I'd like to stop it cold and fix it rather than trying to figure it out later. With two levels of concurrency in a shared memory space, it could get hairy fast if bugs were allowed to propagate elsewhere before crashing something.

              I'm not familiar with the details of Intel CET, but this is basically just implementing what others call fibers or "green threads", so any such special handling should certainly be possible if necessary.
              • anematode 21 minutes ago
                Cool. I have faith in thorough testing for the coroutines' correctness, but sanitizers would be convenient for people debugging their own code that leverages this library. I know that ASan doesn't support getcontext *et al.*, but maybe this is different.
  • sovande 1 hour ago
    Didn't read the code yet, but stuff like this tends to be brittle. Do you do something clever around stack overflow and function-return overwrites, or would that just mess up all coroutines using the same stack?
    • ambonvik 1 hour ago
      Each coroutine runs on its own stack. They are fixed-size stacks, at least for now, so that could be a tender point, but I place some sentinel values at the end to try to catch an overflow in an assert() instead of just letting it crash. I did not think it worth the effort and speed penalty to implement growing stacks yet. However, I do catch any coroutine function returns safely instead of letting them fall off the end of their stack.
  • quibono 2 hours ago
    I don't know enough about event simulation to talk API design in depth, but I find the stackful coroutine approach super interesting, so I'll be taking a look at the code later!

    Do you plan on accepting contributions, or do you see the repo as a read-only source?
    • ambonvik 2 hours ago
      I would be happy to accept contributions, especially ports to additional architectures. I think the dependency is relatively well encapsulated (see src/port), but code for additional architectures needs to be well tested on the actual platform, and there are limits to how much hardware fits on my desk.