# In concurrent.py, class Future:
    def add_done_callback(self, fn):
        if self._done:
            from tornado.ioloop import IOLoop
            IOLoop.current().add_callback(fn, self)
        else:
            self._callbacks.append(fn)
So self._finish_request is called before self._on_write_complete;
_clear_callbacks() is called, which resets self._write_callbacks.
When control returns to the event loop, _on_write_complete is finally
called, but it finds self._write_future set to None and does not resolve
the future that would permit RequestHandler.flush() to continue.
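
To make the mechanism easier to follow, here is a small self-contained
asyncio toy (this is not Tornado code; the names just mirror the ones
above) showing how a done-callback that is deferred to the event loop can
find its state already cleared, so the future being awaited never gets
resolved:

import asyncio

class Connection:
    def __init__(self):
        self._write_future = None

    async def flush(self):
        self._write_future = asyncio.Future()
        waiter = self._write_future
        pending = asyncio.Future()
        pending.set_result(None)  # the low-level write already completed
        # As in the concurrent.py excerpt above: the callback is deferred to
        # the event loop instead of being invoked synchronously.
        asyncio.get_running_loop().call_soon(self._on_write_complete, pending)
        self._clear_callbacks()   # runs before control returns to the loop
        # In the real case this await simply hangs; the timeout only keeps
        # the demo finite.
        await asyncio.wait_for(waiter, timeout=1)

    def _on_write_complete(self, future):
        if self._write_future is not None:  # already None: waiter not resolved
            self._write_future.set_result(None)

    def _clear_callbacks(self):
        self._write_future = None

asyncio.run(Connection().flush())   # raises asyncio.TimeoutError

With a callback invoked synchronously (the old behaviour), _on_write_complete
would run before the state is cleared and the waiter would be resolved.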
If there is still data to write, self._pending_write is not done when
future_add_done_callback is called, and RequestHandler.flush() won't get
stuck.
On Mon, Jan 27, 2020 at 09:46:35PM +0100, Enrico Zini wrote:
> If there is still data to write, self._pending_write is not done when
> future_add_done_callback is called, and RequestHandler.flush() won't get
> stuck.
Keeping the write buffer nonempty is not sufficient: if only a small
amount of data remains in it, the problem is triggered anyway, because it
gets written right away and the _pending_write future is set to done()
before control reaches the event loop, triggering the hang again.
Since this depends on whether there is enough pending data to make the
write block, it's hard to predict what a reasonable amount of data to
leave in the pipeline as a workaround would be.
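
For concreteness, the kind of workaround I mean looks roughly like this
(hypothetical sketch; the handler, the padding and its size are made up,
and as said above there is no padding size that reliably works):

import tornado.web

class PaddedHandler(tornado.web.RequestHandler):
    async def get(self):
        self.write(b"real response body")
        # Hypothetical workaround: pad the output so the write buffer is
        # still nonempty (and _pending_write not yet done) when finish()
        # runs.  If the padding is small enough to be written out
        # immediately, the hang comes back, and "small enough" depends on
        # the socket buffers.
        self.write(b" " * 65536)
        await self.finish()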
In other words, I currently cannot find a way to await finish() in
Tornado 5. It would be important for me to do so, as it is the only way I
have to log the outcome of a request, for example to figure out whether
the remote end closed the connection early.
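
For reference, this is the kind of thing I am trying to do (hypothetical
minimal handler; the names and the logging are mine):

import logging

import tornado.ioloop
import tornado.web

class Handler(tornado.web.RequestHandler):
    async def get(self):
        self.write("hello")
        # finish() returns a Future since Tornado 5.1, so in principle it
        # can be awaited; per the above, this await never completes.
        await self.finish()
        # This is where the outcome of the request could be logged, e.g.
        # whether the client went away before the response was delivered.
        logging.info("finished answering %s", self.request.remote_ip)

if __name__ == "__main__":
    tornado.web.Application([(r"/", Handler)]).listen(8888)
    tornado.ioloop.IOLoop.current().start()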
- - -
To answer Julien Puydt, this is fixed upstream in Tornado 6, which is
not in Debian yet.
By now I would call this a Debian-specific bug, and I think it's useful
to document it here rather than waste upstream's time with an issue they
fixed in a version released a year ago.