What are the boundaries of recursion?

Given

let doAsynchronousStuff = () => {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve("abcdefg"[Math.floor(Math.random() * 7)])
    }, Math.PI * 1 + Math.random())
  })
  .then(data => console.log(data))
  .then(doAsynchronousStuff)
}

Is the pattern an implementation of

  • recursion;
  • tail-call optimization;
  • iteration;
  • a non-terminating procedure that happens to refer to itself;

or some other common pattern that is not listed above?

Looking for an answer drawing from credible and/or official sources.

Answers:

Answer

I rewrote the code, eliminating all non-relevant stuff and using a style I feel is more readable and appealing in this context.

function doAsynchronousStuff()
{
   return new Promise((resolve, reject) =>
   {
      setTimeout(() => { resolve("test") }, 0)
   })
   .then(console.log)
   .then(doAsynchronousStuff);
}

We should analyze the execution flow remembering that JS has an event loop; in particular:

  • setTimeout posts its argument function to be executed on the next¹ cycle of the event loop.
  • then posts its argument function to be executed on the next cycle of the event loop.

The existence of the event loop is important because the functions posting messages onto it run to completion before the loop is re-entered.
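Run-to-completion is easy to observe directly. In this sketch (mine, not from the question; the `order` array just records what happened), the synchronous code finishes before any posted message runs, and the promise continuation runs before the timer callback:

```javascript
const order = [];
const log = msg => { order.push(msg); console.log(msg); };

log("sync 1");
setTimeout(() => log("timeout"), 0);        // posted to a later cycle of the event loop
Promise.resolve().then(() => log("then"));  // runs right after the current code completes
log("sync 2");

// Prints: sync 1, sync 2, then, timeout
```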

Also, a good knowledge of promises is required; for example, knowing that then returns a new promise.
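That last point can be checked in a couple of lines (a minimal sketch of my own, assuming Node or a browser console):

```javascript
const p1 = Promise.resolve(1);
const p2 = p1.then(v => v + 1);   // p2 is a brand-new promise, not p1

console.log(p1 === p2);           // false: each then call creates a new promise
p2.then(v => console.log(v));     // logs 2
```

This is why a chain like `.then(console.log).then(doAsynchronousStuff)` is a series of distinct promise objects, each one's continuation scheduled when the previous one settles.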

When doAsynchronousStuff is executed the Promise object is constructed and its argument function is called immediately.

Execution stack                      Event loop messages

doAsynchronousStuff
Promise constructor
Closure (resolve, reject)

This in turn calls setTimeout, which posts an event and returns.

Execution stack                      Event loop messages

doAsynchronousStuff                  resolve("test")
Promise constructor
Closure (resolve, reject)
setTimeout

The execution falls back to doAsynchronousStuff, which sets the continuations for the Promise objects but of course doesn't execute them. So in the end doAsynchronousStuff returns and we have a run-to-completion situation.

Execution stack                      Event loop messages

                                     resolve("test")

The event loop executes resolve("test") (or rather the closure that contains it), which sets the promise as resolved and schedules its continuation on the next cycle.

Execution stack                      Event loop messages

resolve                              console.log

When resolve ends, we again have a run-to-completion situation.

Execution stack                      Event loop messages

                                     console.log

console.log is executed. Actually, a function that calls console.log is executed; this function is set up by the promise object when then is called.
When console.log returns, its promise resolves and doAsynchronousStuff is posted on the event loop.

Execution stack                      Event loop messages

resolve                              doAsynchronousStuff

When resolve ends we have a run-to-completion situation, and doAsynchronousStuff is executed once again.


Now, I won't dig too deeply into the mathematical or CS-theoretical sense of the terms listed in your question; doing so would have no practical benefit, as I don't believe this is a theoretical question.
I will instead limit myself to a programming point of view.

By the time the second doAsynchronousStuff instance is called, the first one is long gone (it ran to completion).
Basically the situation is equivalent to doing this

let f = () => { console.log('hi!'); setTimeout(f, 0); }
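One way to convince yourself that each invocation is long gone before the next one runs is to look at the stack depth (a rough sketch of mine; new Error().stack is non-standard but available in Node and the major browsers):

```javascript
let calls = 0;
function f() {
  // Count stack frames; with setTimeout the depth does not grow from call to call.
  const frames = new Error().stack.split("\n").length;
  console.log("call", ++calls, "stack frames:", frames);
  if (calls < 3) setTimeout(f, 0);
}
f();
```

A directly recursive f() would add a frame per call; here each call starts from a nearly empty stack.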

I wouldn't call this function recursive, since recursion implies a decomposition of a problem into smaller self-similar parts.
A recursive function doesn't have to call itself directly, nor does it have to "make the stack grow", but it must be defined in terms of itself.

If it were like

let f = () => { f(); }

I'd call it (badly) recursive. So what is it?
I'd like to say that a function is recursive in the programming sense if you can't complete it without completing all the calls it makes to itself.
The first example can be completed without waiting for the subsequent invocations of f to complete; the second one cannot.
In my mind I call the first version of f scheduled.
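The distinction can be made concrete with a pair of toy functions (my own sketch; the names are illustrative, not standard terms):

```javascript
// Recursive: countdownRecursive(3) cannot return until the calls it makes have returned.
function countdownRecursive(n) {
  if (n === 0) return "done";
  return countdownRecursive(n - 1);
}

// Scheduled: each call returns immediately; the next one runs on a later cycle.
function countdownScheduled(n) {
  if (n === 0) { console.log("done"); return; }
  setTimeout(() => countdownScheduled(n - 1), 0);
}

console.log(countdownRecursive(3)); // "done"
countdownScheduled(3);              // returns at once; "done" is logged later
```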

Regarding tail-call optimization: it has nothing to do with this.
TCO transforms a particular kind of recursion into a loop; it is a compiler optimization, not a property of the code.
A tail call is a property of the code, but this code is not tail-recursive, as it is not recursive in the first place.

It is also not iteration in the programming sense (while in the theoretical sense it is), since iteration is achieved with specific constructs (like for, while, goto).
The boundary is blurry here, as iteration, recursion and scheduling overlap.
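One way to see just how blurry: with async/await the very same scheduled pattern can be written using an actual iteration construct (a sketch of my own, not from the question; the delay helper is mine):

```javascript
const delay = ms => new Promise(resolve => setTimeout(resolve, ms));

async function doAsynchronousStuff() {
  while (true) {  // an explicit iteration construct
    await delay(Math.PI + Math.random());
    console.log("abcdefg"[Math.floor(Math.random() * 7)]);
  }
}
// (Not called here: like the original, it would never terminate.)
```

Same scheduling behavior underneath, but now the source code reads as a loop rather than a self-reference.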

Finally this is certainly the case of a non-terminating procedure that happens to refer to itself.


¹ We make a simplification here: it doesn't have to be the very next cycle, just some future cycle.

Answer

None of the above. The code in question is not recursive. It is not quite iterative either: from the point of view of the English language it is iterative, but from the point of view of what we normally call iterative in programming it is not (note that in plain English recursion is also iterative, but we don't say that in programming). Since it is not recursive, the phrase "tail-call-optimised" does not apply. And it is not non-terminating, since the function ends with a return.

What it is is a function that schedules a series of functions to execute at a later time one of which is itself.

Scheduling is a design pattern. One of the oldest examples of scheduling is the process scheduling that operating systems do. One of the next oldest examples is cron.

How scheduling works is that the run-time environment (Linux kernel, Windows kernel, the cron process, javascript) saves a "database" of some sort of references to the code it is supposed to run, together with the conditions that trigger them; the "database" may be as simple as a linked list, as high-level as SQL, or as low-tech as a text file (check out the AWS Lambda service for a very high-level implementation of this idea). It then periodically checks whether a condition is met and, if so, executes the corresponding code.

For OS kernels the set of conditions includes some sort of fairness algorithm to ensure that all programs get to use the CPU. For cron the condition is the time specification in crontab. For javascript the condition is the event that the callback is registered with (for setTimeout it is the timeout event).
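A toy version of such a scheduler "database" might look like this (my own sketch; a real scheduler is far more involved, but the shape is the same: store code plus condition, check periodically, run when the condition is met):

```javascript
// A toy scheduler: a list of { dueAt, fn } entries, checked periodically.
const jobs = [];
const schedule = (fn, delayMs) => jobs.push({ dueAt: Date.now() + delayMs, fn });

function runPending() {
  const now = Date.now();
  for (let i = jobs.length - 1; i >= 0; i--) {
    if (jobs[i].dueAt <= now) {
      const [job] = jobs.splice(i, 1); // remove it from the "database"...
      job.fn();                        // ...and run it: its condition is met
    }
  }
}

schedule(() => console.log("hello from the scheduler"), 5);
const pump = setInterval(() => {
  runPending();
  if (jobs.length === 0) clearInterval(pump); // nothing left to run, stop polling
}, 1);
```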

Traditionally, if you were to write your own software for this you'd write it as a simple state machine. The following is C-like pseudocode implementing the same thing as your example above

int tick = 0;

// Assume that there is an API for registering 1ms periodic interrupt
interrupt_1ms periodic () {
    tick++;
}

int main (void) {
    int timeout = PI + rand(); // a fairly silly way to randomly select 3 or 4 ms
    char state = 0;
    char result = '\0';
    char* data = "abcdefg";

    while (1) {
        if (tick >= timeout && state == 0) {
            state = 1;
            tick = 0;
            timeout = PI + rand();
        }

        switch (state) {
            case 1:
                result = data[floor(rand() * 7)];
                state = 2;
                break;
            case 2:
                printf("%c", result);
                state = 3;
                break;
            case 3:
                state = 0; // reschedule the doAsynchronousStuff
                break;
        }
    }
}

That's sort of the traditional way. What javascript does is not exactly the same but similar in concept. It still uses a forever loop as the core of the event loop, but it does not run continuously (that would waste CPU time, heat up the CPU and drain batteries). Instead it blocks calling one of the asynchronous I/O APIs (select, poll, epoll, kqueue etc.; libuv selects one at compile time) and passes control to the OS, which puts the process to sleep until one of the registered I/O events is triggered.
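For comparison, the traditional state machine above can be sketched in JavaScript itself (my translation; setInterval stands in for the 1 ms interrupt, and a step cap is added so the sketch terminates, unlike the original while (1)):

```javascript
// The pseudo-C state machine, transliterated; setInterval plays the periodic interrupt.
let state = 0;
let result = null;
const data = "abcdefg";
let steps = 0;

const loop = setInterval(() => {
  switch (state) {
    case 0: state = 1; break;                                 // pretend the timeout expired
    case 1: result = data[Math.floor(Math.random() * 7)]; state = 2; break;
    case 2: console.log(result); state = 3; break;
    case 3: state = 0; break;                                 // reschedule doAsynchronousStuff
  }
  if (++steps >= 12) clearInterval(loop);                     // stop so this sketch terminates
}, 1);
```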

Now, notice your code:

let doAsynchronousStuff = () => {
  return new Promise(resolve => {
    setTimeout(() => {
      resolve("abcdefg"[Math.floor(Math.random() * 7)])
    }, Math.PI * 1 + Math.random())
  })
  .then(data => console.log(data))
  .then(doAsynchronousStuff)
}

I don't know about you, but to me it is significantly easier to reason about than the traditional state machine. OK, for this very simple example the C pseudocode above is quite easy to understand, but consider a real-world node.js or jQuery application with tens or hundreds of complex events (in the case of traditional jQuery apps, those events may even unschedule themselves or schedule even more event handlers). As the number of events you have to handle grows, what javascript gives you in its syntax becomes far more readable, even though for one event a beginner who's not familiar with anonymous functions and asynchronous code may prefer my pseudo-C example.

Even old-school non-promisified callbacks are more readable than the pseudo-C code:

function doAsynchronousStuff () {
    setTimeout(function () {
      console.log("abcdefg"[Math.floor(Math.random() * 7)]);
      doAsynchronousStuff();
    }, Math.PI * 1 + Math.random());
}

So the syntax may be new (well, not that new; Lispers have been doing this sort of thing since the '70s), but the idea is old. The core concept may not be recognisable due to the syntax, so don't get too distracted by it. It's just scheduling something to run with a timer. And we simply call repeated scheduling "repeated scheduling" (both Google Calendar and Apple Calendar call it "repeat").
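In fact, javascript has a dedicated API for exactly this kind of repeated scheduling, which arguably makes the intent the clearest of all (a sketch of mine; clearInterval is added so the example can stop):

```javascript
let ticks = 0;
const id = setInterval(() => {
  ticks++;
  console.log("abcdefg"[Math.floor(Math.random() * 7)]);
}, 4);

setTimeout(() => clearInterval(id), 50); // stop after a few ticks so the program can exit
```

No self-reference at all, yet the observable behavior matches the original: the same work is scheduled again and again by the run-time's timer machinery.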
