| Commit message | Author | Age |
| |
- New libuv/pty process abstraction with simplified API and no globals.
- Remove nvim/os/job*. Jobs are now a concept that applies only to programs
  spawned by vimscript job* functions.
- Refactor shell.c/channel.c to use the new module, which brings a number of
advantages:
- Simplified API, less code
- No slots in the user job table are used
  - Not possible to accidentally receive data from vimscript
- Implement job table in eval.c, which is now a hash table with unlimited job
  slots and unique job ids.
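
To illustrate the idea, here is a minimal sketch of a job table keyed by
unique, monotonically increasing ids (names such as job_table_put/job_table_get
are hypothetical, not the actual eval.c code):

    #include <stdint.h>
    #include <stdlib.h>

    typedef struct job_entry {
      uint64_t id;             // unique id handed back to vimscript
      void *data;              // opaque pointer to the underlying process state
      struct job_entry *next;  // collision chaining
    } JobEntry;

    #define JOB_TABLE_BUCKETS 64
    static JobEntry *job_table[JOB_TABLE_BUCKETS];
    static uint64_t next_job_id = 1;

    static uint64_t job_table_put(void *data)
    {
      JobEntry *e = malloc(sizeof(*e));
      e->id = next_job_id++;
      e->data = data;
      size_t bucket = e->id % JOB_TABLE_BUCKETS;
      e->next = job_table[bucket];
      job_table[bucket] = e;
      return e->id;  // ids are never reused, unlike fixed slot indexes
    }

    static void *job_table_get(uint64_t id)
    {
      for (JobEntry *e = job_table[id % JOB_TABLE_BUCKETS]; e; e = e->next) {
        if (e->id == id) {
          return e->data;
        }
      }
      return NULL;
    }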
| |
- Simplify RStream/WStream API and make it more consistent with libuv.
- Move into the event loop layer (event subdirectory)
- Remove uv_helpers module.
- Simplify job/process internal modules/API.
- Unify RStream and WStream into a single structure. This is necessary because
  libuv streams can be readable and writable at the same time (and because the
  uv_helpers.c hack to associate multiple streams with a libuv handle was
  removed); a sketch of the unified structure follows below.
- Make struct definition public, allowing more flexible/simple memory
management by users of the module.
- Adapt channel/job modules to cope with the changes.
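
A hedged sketch of what such a unified structure can look like (field and type
names are illustrative, not the actual module):

    #include <stdbool.h>
    #include <stddef.h>
    #include <uv.h>

    typedef struct stream Stream;
    typedef void (*stream_read_cb)(Stream *stream, size_t count, void *data);
    typedef void (*stream_write_cb)(Stream *stream, void *data, int status);

    struct stream {
      uv_stream_t *uv;           // underlying libuv handle (pipe, tty, tcp, ...)
      stream_read_cb read_cb;    // invoked when data arrives
      stream_write_cb write_cb;  // invoked when a queued write completes
      void *data;                // context pointer supplied by the owner
      size_t pending_writes;     // writes queued but not yet flushed
      bool closed;
    };

Because the definition is public, owners can embed a Stream directly in their
own structs instead of heap-allocating it through the module.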
| |
- Add event loop abstraction module under src/nvim/event. The
src/nvim/event/loop module replaces src/nvim/os/event
- Remove direct dependency on libuv signal/timer API and use the new abstraction
instead.
- Replace all references to uv_default_loop() with &loop.uv, a new global
  variable that wraps the libuv main event loop but allows the event loop
  functions to be reused in other contexts.
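
A minimal sketch of the wrapping idea (hypothetical names; only the &loop.uv
pattern is taken from the message above):

    #include <uv.h>

    typedef struct loop {
      uv_loop_t uv;  // embedded libuv loop; &loop.uv replaces uv_default_loop()
      // ... timers, queues and other per-loop state would live here ...
    } Loop;

    Loop loop;

    void loop_init(Loop *l)
    {
      uv_loop_init(&l->uv);  // a dedicated loop instead of the default one
      l->uv.data = l;        // lets libuv callbacks recover the wrapper
    }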
| |
These macros would never return true since the preceding waitpid() call
did not specify the WUNTRACED or WCONTINUED options (which is correct,
since we only care about processes that have exited here).
Besides removing dead code, this improves portability since WIFCONTINUED
is not defined on all platforms.
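
For illustration (not the Nvim code): without WUNTRACED/WCONTINUED, waitpid()
only reports terminated children, so WIFSTOPPED()/WIFCONTINUED() checks on the
returned status can never be true.

    #include <sys/types.h>
    #include <sys/wait.h>

    void reap_exited_children(void)
    {
      int status;
      // No WUNTRACED/WCONTINUED: stopped or continued children are never reported.
      while (waitpid(-1, &status, WNOHANG) > 0) {
        if (WIFEXITED(status)) {
          // normal exit; WEXITSTATUS(status) holds the exit code
        } else if (WIFSIGNALED(status)) {
          // terminated by a signal; WTERMSIG(status) holds the signal number
        }
        // WIFSTOPPED(status) / WIFCONTINUED(status) would be dead code here.
      }
    }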
| |
Extract the RBuffer class from rstream.c and reimplement it as a ring buffer,
a more efficient version that doesn't need to relocate memory.
The old rbuffer_read/rbuffer_write interfaces are kept for simple
reading/writing, and the RBUFFER_UNTIL_{FULL,EMPTY} macros are introduced to
hide wrapping logic when more control is required (such as passing the buffer
pointer to a library function that writes directly to it).
Also add basic infrastructure for writing helper C files that are only
compiled in the unit test library, and use this to write unit tests for
RBuffer, which contains some macros that can't be accessed directly by luajit.
Helped-by: oni-link <knil.ino@gmail.com>
Reviewed-by: oni-link <knil.ino@gmail.com>
Reviewed-by: Scott Prager <splinterofchaos@gmail.com>
Reviewed-by: Justin M. Keyes <justinkz@gmail.com>
Reviewed-by: Michael Reed <m.reed@mykolab.com>
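
A minimal ring-buffer sketch of the same idea (hypothetical names, not the
actual RBuffer API): read and write positions wrap around a fixed allocation,
so the buffer never has to move memory to make room.

    #include <stddef.h>
    #include <string.h>

    typedef struct {
      char *data;
      size_t size;   // capacity
      size_t start;  // read offset
      size_t count;  // bytes currently stored
    } RingBuf;

    size_t ring_write(RingBuf *b, const char *src, size_t len)
    {
      size_t written = 0;
      while (written < len && b->count < b->size) {
        size_t end = (b->start + b->count) % b->size;  // first free byte
        size_t chunk = b->size - end;                  // contiguous room before wrap
        if (chunk > b->size - b->count) chunk = b->size - b->count;
        if (chunk > len - written) chunk = len - written;
        memcpy(b->data + end, src + written, chunk);
        b->count += chunk;
        written += chunk;
      }
      return written;
    }

    size_t ring_read(RingBuf *b, char *dst, size_t len)
    {
      size_t read = 0;
      while (read < len && b->count > 0) {
        size_t chunk = b->size - b->start;  // contiguous bytes before wrap
        if (chunk > b->count) chunk = b->count;
        if (chunk > len - read) chunk = len - read;
        memcpy(dst + read, b->data + b->start, chunk);
        b->start = (b->start + chunk) % b->size;
        b->count -= chunk;
        read += chunk;
      }
      return read;
    }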
| |
The sys/wait.h include was moved after the vim.h include: since the include
guards are defined in config.h, they cannot be used any earlier.
| |
Since all reads are queued by the event loop, we must also queue the exit event,
or else the process_close function can close the job streams before received
data is processed.
| |
Fix the handle pointers passed to the uv_close() calls when process_spawn()
fails.
| |
Add a SIGCHLD handler for cleaning up pty processes, passing the WNOHANG flag.
It may also be used to clean up processes spawned with uv_spawn.
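
An illustrative sketch of such a handler (not the Nvim implementation): WNOHANG
lets it reap every exited child without blocking.

    #include <signal.h>
    #include <sys/types.h>
    #include <sys/wait.h>

    static void sigchld_handler(int signum)
    {
      (void)signum;
      int status;
      // WNOHANG: return immediately when no more children have exited.
      while (waitpid(-1, &status, WNOHANG) > 0) {
        // Mark the matching pty (or uv_spawn) process as exited; the real
        // bookkeeping happens outside the handler, which must stay
        // async-signal-safe.
      }
    }

    // Installed once during startup, e.g.:
    //   struct sigaction sa = { .sa_handler = sigchld_handler };
    //   sigaction(SIGCHLD, &sa, NULL);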
| |
- Process spawning was decoupled from the rest of the job control logic. The
  goal is to reuse it for spawning processes connected to pseudo-terminal file
  descriptors.
- job_start now receives a JobOptions structure containing all the startup
options.
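
A hedged sketch of what such an options structure can look like (field and
callback names are illustrative, not necessarily the real JobOptions):

    #include <stdbool.h>
    #include <stddef.h>

    typedef void (*job_read_cb)(void *data, char *buf, size_t count);
    typedef void (*job_exit_cb)(void *data, int status);

    typedef struct {
      char **argv;            // NULL-terminated argv; ownership moves to the job
      void *data;             // caller context handed back in every callback
      bool writable;          // open a stdin pipe?
      job_read_cb stdout_cb;  // NULL means "ignore stdout"
      job_read_cb stderr_cb;  // NULL means "ignore stderr"
      job_exit_cb exit_cb;    // called after the process exits and streams close
      bool pty;               // connect the child to a pseudo terminal
    } JobOptions;

Grouping the options in one struct lets new fields (such as pty settings) be
added later without changing the job_start signature again.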
| |
Send SIGTERM immediately, since it can be caught by processes. If they don't
respond and are still alive after a while, SIGKILL will be sent.
| |
A blocking call job_wait(job, -1) can only return after the job has finished
and all of its handles are closed. But hitting CTRL-C makes job_wait()
return early while handles may still be open. This can lead to problems
with the job/handle callbacks if the caller (of job_wait()) has already
freed the memory that is used in the job callbacks.
To fix this, only return after all handles of the job are closed.
| |
The argv argument of job_start() and channel_from_job() will be
freed. Mark it as such in the comments of these functions.
| |
If a new job cannot be started because no slots are free, we return early
without freeing the argv argument.
| |
stdout/stderr should only be closed after the job truly exits, or else we can
lose data sent by it.
| |
Remove the current teardown logic and reuse the job stop timers, calling
event_poll_until until all jobs exit or are killed.
| |
Use a timer to periodically compare the current HR time against the HR time of
when `job_stop` was called. After 1 second, send SIGTERM; after 2 seconds, send
SIGKILL. The timer is only active when there's at least one `job_stop` call
pending.
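
An illustrative sketch of the escalation (not the actual job.c code; libuv 1.x
callback signatures are assumed):

    #include <signal.h>
    #include <stdint.h>
    #include <uv.h>

    typedef struct {
      uv_process_t proc;
      uint64_t stop_requested;  // uv_hrtime() at the job_stop() call, 0 if none
    } StoppingJob;

    static void stop_timer_cb(uv_timer_t *timer)
    {
      StoppingJob *job = timer->data;
      if (!job->stop_requested) {
        return;  // the timer only matters while a stop is pending
      }
      uint64_t elapsed = uv_hrtime() - job->stop_requested;
      if (elapsed >= 2 * 1000000000ULL) {     // 2 seconds: no mercy
        uv_process_kill(&job->proc, SIGKILL);
      } else if (elapsed >= 1000000000ULL) {  // 1 second: ask politely first
        uv_process_kill(&job->proc, SIGTERM);
      }
    }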
| |
Passing NULL as the callback for stdout/stderr will result in job_start ignoring
stdout/stderr, respectively. A 'writable' boolean argument was also added, and
when false, `job_start` will ignore stdin.
Also, refactor os_system to allow passing NULL as the `output` argument.
| |
Commit @709685b4612f4 removed the close_job_* calls when uv_spawn fails because
of memory errors when trying to clean up uninitialized {R,W}Stream instances,
but the uv_pipe_t instances must be closed because they are added to the event
loop queue by previous `uv_pipe_init()` calls.
| |
- Extract `process_interrupts` out of `convert_input`
- Instead of waiting for os_breakcheck/os_inchar calls, call `convert_input`
and `process_interrupts` directly from the read callback in input.c.
- Remove the `settmode` calls from `job_wait`. Now that interrupts are
  processed in the event loop, there's no need to set the terminal to cooked
  mode, which introduces other problems (ref 7.4.427).
| |
The streams that job_close_*() references have not been initialized by the
time we call uv_spawn(), and libuv closes these pipes for us when spawn()
fails.
| |
This is required to prevent the scenario explained by @akkartik in #1324
| |
It's possible that a child process won't close its standard streams, even after
it exits. This can be evidenced with the "xclip" program:
:call system('xclip -i -selection clipboard', 'DATA')
Before this commit, the above command wouldn't return, even though the xclip
program had exited. That is because `xclip` wasn't closing its stdout/stderr
streams, which would block pending_refs from ever reaching 0.
The job.c module was therefore refactored to ensure all streams are closed when
the uv_process_t handle is closed.
| |
A pattern that is becoming common across the project is to poll for events until
a certain condition is true, optionally passing a timeout. To address this
scenario, the event_poll_until macro was created and the job/channel/input
modules were refactored to use it in their blocking functions.
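
A hedged sketch of what such a macro can look like (illustrative;
loop_poll_once is a hypothetical helper that runs one event-loop iteration and
decrements the remaining time):

    #define POLL_UNTIL(remaining_ms, condition) \
      do { \
        int remaining_ = (remaining_ms);  /* negative means wait forever */ \
        while (!(condition)) { \
          if (remaining_ == 0) { \
            break;  /* timed out before the condition became true */ \
          } \
          loop_poll_once(&remaining_);  /* run one event loop iteration */ \
        } \
      } while (0)

    /* Usage sketch: POLL_UNTIL(-1, job->pending_closes == 0); */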
| |
This is how asynchronous events are currently handled by Nvim:
- The libuv event loop is entered when Nvim blocks for user input (os_inchar
  is called)
- Any event delivered by libuv that is not user input is queued for processing
- The `K_EVENT` special key code is returned by os_inchar
- `K_EVENT` is returned to a loop that is reading keys for the current Nvim
mode, which will be handled by calling event_process()
This approach has the advantage of integrating nicely with the current
codebase, e.g. vimscript code can be executed asynchronously with few
surprises (it's the same as if the user typed a key).
The problem with using keys to represent any event is that it also interferes with
operators, and not every event needs or should do that. For example, consider
this scenario:
- A msgpack-rpc client calls vim_feedkeys("d")
- Nvim processes K_EVENT, pushing "d" to the input queue
- Nvim processes "d", entering operator-pending mode to wait for a motion
- The client calls vim_feedkeys("w"), expecting Nvim to delete a word
- Nvim processes K_EVENT, breaking out of operator-pending and pushing "w"
- Nvim processes "w", moving a word
This commit fixes the above problem by removing all automatic calls to
`event_push` (which is what generates K_EVENT input). Right now this also
breaks redrawing initiated by asynchronous events (and possibly other stuff
too; Nvim is a complex state machine and we can't simply run vimscript code
anywhere).
In future commits the calls to `event_push` will be inserted only where it's
absolutely necessary to run code in "key reading loops", such as when executing
vimscript code or mutating editor data structures in ways that currently can
only be done by the user.
| |
This approach is more flexible because we don't need to support a fixed set of
"event types"; any module can push events to be handled in the main loop by
simply passing a callback to the Event structure.
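
A sketch of the idea (names are illustrative): an event is reduced to a
callback plus an argument that the main loop will invoke later.

    typedef struct event {
      void (*handler)(void *argv);  // code to run in the main loop
      void *argv;                   // module-specific payload
    } Event;

    // A module queues work for the main loop with something like:
    //   event_push((Event) { .handler = on_job_exit, .argv = job });
    // and the loop later drains its queue, calling ev.handler(ev.argv) for
    // each entry, so no central switch over event types is needed.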
| |
RBuffer instances represent the internal buffer used by RStreams.
This changes the RStream constructor to receive RBuffer pointers and adds a
set of RBuffer methods that expose lower-level buffer manipulation to consumers
of the RStream API.
| |
* With the changes in commit
"events: Refactor how event deferral is handled"
(2e4ea29d2c7b62eb8baf1c41cd43433e085dda0) the function argument
'defer' of 'job_start' and member variable 'defer' of 'struct job'
can be removed.
* Update/Fix the documentation for function 'job_start'.
| |
It used to be 1024 bytes, which is very tiny and slows down some operations
(imagine `cat`-ing a large file). Benchmarks show a large speedup for such
cases. ref #978.
For modern systems 0xFFFF bytes (65535 B = 64 KB = 0.0625 MB) per job
shouldn't be a big problem.
| |
- One can now manually close the in-pipe, without having to tear down the
job.
- One can be notified of write success/failure.
| |
Used to wait synchronously for a job to end.
| |
Free the data memory of process and pipe handles in the close callback
for a job.
| |
- Remove all *_set_defer methods and the 'defer' flag from rstream/jobs
- Added {signal,rstream,job}_event_source functions. Each returns a pointer
  that represents the event source for the object in question (for signals, a
  static pointer is returned).
- Added a 'source' field to the Event struct, which is set to the appropriate
value by the code that created the event.
- Added a 'sources' parameter to `event_poll`. It should point to a
NULL-terminated array of event sources that will be used to decide which
events should be processed immediately
- Added a 'source_override' parameter to `rstream_new`. This was required to
  use jobs as event sources of RStream instances (when "focusing" on a job, for
  example).
- Extracted `process_from` static function from `event_process`.
- Remove 'defer' parameter from `event_process`, which now operates only on
deferred events.
- Refactor `channel_send_call` to use the new lock mechanism
What changed in a single sentence: code that calls `event_poll` has to specify
which event sources should NOT be deferred. This change was necessary for a
number of reasons:
- To fix a bug where, due to race conditions, a client request
  could end up in the deferred queue in the middle of a `channel_send_call`
invocation, resulting in a deadlock since the client process would never
receive a response, and channel_send_call would never return because
the client would still be waiting for the response.
- To handle "event locking" correctly in recursive `channel_send_call`
invocations when the frames are waiting for responses from different
clients. Not much of an issue now since there's only a python client, but
could break things later.
- To simplify the process of implementing synchronous functions that depend on
asynchronous events.
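
A hedged sketch of the dispatch decision described above (illustrative, not the
actual event.c code): an event is handled immediately only if its source
appears in the NULL-terminated `sources` array passed to the poll call.

    #include <stdbool.h>
    #include <stddef.h>

    typedef void *EventSource;

    typedef struct {
      EventSource source;  // set by whoever created the event
      void (*handler)(void *data);
      void *data;
    } Event;

    static bool source_is_immediate(EventSource src, EventSource *sources)
    {
      for (size_t i = 0; sources && sources[i] != NULL; i++) {
        if (sources[i] == src) {
          return true;
        }
      }
      return false;
    }

    // Inside the poll loop, each incoming event is routed roughly like:
    //   if (source_is_immediate(ev.source, sources)) { /* run it now */ }
    //   else { /* push it to the deferred queue */ }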
| |
`-Wstrict-prototypes` warns if a function is declared or defined without
specifying the argument types.
This warning disallows function prototypes with an empty parameter list.
In C, a function declared with an empty parameter list accepts an
arbitrary number of arguments when being called. This is for historic
reasons; originally, C functions didn't have prototypes, as C evolved
from B, a typeless language. When prototypes were added, the original
typeless declarations were left in the language for backwards
compatibility.
Instead, we should provide `void` in the argument list to state
that the function doesn't take arguments.
This warning also disallows declaring the types of the parameters after the
parentheses, because the Neovim header generator produces no declarations for
old-style prototypes: it expects to find `{` after the prototype.
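
An illustrative example of what the warning rejects and accepts:

    /* Declarations */
    int count_lines();      /* empty parameter list: no prototype, warns */
    int count_lines(void);  /* explicit void: a real prototype, accepted */

    /* Old-style (K&R) definition with parameter types after the parentheses:
     * also rejected, and the header generator cannot emit a declaration for it
     * because it looks for '{' right after the prototype. */
    int add_old(a, b)
      int a;
      int b;
    {
      return a + b;
    }

    /* ANSI-style definition: accepted. */
    int add_ansi(int a, int b)
    {
      return a + b;
    }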
| |
The value is forwarded to its own WStream instance.
| |
To make it possible to reuse `event_poll` recursively and in other blocking
function calls, this changes how deferred/immediate events are processed:
- There are two queues in event.c, one for immediate events and another for
deferred events. The queue used when pushing/processing events is determined
with boolean arguments passed to `event_push`/`event_process` respectively.
- Events pushed to the immediate queue are processed inside `event_poll` but
after the `uv_run` call. This is required because the libuv event loop does not
support recursion, and processing events may result in other `event_poll`
calls.
- Events pushed to the deferred queue are processed later by calling
`event_process(true)`. This is required to "trick" vim into treating all
asynchronous events as special keypresses, which is the least obtrusive
way of introducing asynchronicity into the editor.
- RStream instances will now forward the `defer` flag to the `event_push` call.
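
A sketch of the two-queue scheme (illustrative names): one queue is drained
inside event_poll right after uv_run, the other is drained later from the
key-reading loop via K_EVENT.

    #include <stdbool.h>

    typedef struct event {
      void (*handler)(void *data);
      void *data;
      struct event *next;
    } Event;

    static Event *immediate_queue;  // processed inside event_poll, after uv_run
    static Event *deferred_queue;   // processed later, when K_EVENT is consumed

    static void push_event(Event *ev, bool defer)
    {
      Event **q = defer ? &deferred_queue : &immediate_queue;
      ev->next = *q;  // prepend for brevity; a real queue preserves FIFO order
      *q = ev;
    }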
| |
This was done to give more control over memory management to job_write callers.
| |
This has the same effect as the RStream 'defer' flag, but also works for the
job's exit event.
| |
'job_start' returns the id as an out parameter, and the 'job_find' function is
now used by eval.c to translate job ids into pointers.
| |
- Removed 'copy' parameter from `wstream_new_buffer`. Callers simply pass a
copy of the buffer if required.
- Added a callback parameter, which is used to notify callers when the data is
  successfully written. The callback is also used to free the buffer (if
  required) and is compatible with `free` from the standard library.
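
A usage sketch under these assumptions (the module's declarations are in scope
and the three-argument signature shown here is illustrative): because the
callback both signals completion and releases the buffer, plain free() works
for heap data that needs no other bookkeeping.

    #include <stdlib.h>
    #include <string.h>

    // Assumed context: `ws` is a WStream and wstream_new_buffer(data, size, cb)
    // matches the behavior described above.
    void send_line(WStream *ws)
    {
      char *line = strdup("hello\n");  // heap copy, now owned by the write
      WBuffer *buf = wstream_new_buffer(line, strlen(line), free);
      wstream_write(ws, buf);          // `line` is freed once the write completes
    }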
| |
- The 'stripdecls.py' script replaces declarations in all headers with includes
  of the generated headers.
`ag '#\s*if(?!ndef NEOVIM_).*((?!#\s*endif).*\n)*#ifdef INCLUDE_GENERATED'`
was used for this.
- Add and integrate gendeclarations.lua into the build system to generate the
required includes.
- Add -Wno-unused-function
- Made a bunch of old-style definitions ANSI
This adds a requirement: all type and structure definitions must be present
before INCLUDE_GENERATED_DECLARATIONS-protected include.
Warning: mch_expandpath (path.h.generated.h) was moved manually. So far it is
the only exception.
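
The resulting pattern in a header looks roughly like this (the generated file
name is taken from the message above; placement details are illustrative):

    /* ... type and struct definitions must come before this point ... */

    #ifdef INCLUDE_GENERATED_DECLARATIONS
    # include "path.h.generated.h"  /* declarations emitted by gendeclarations.lua */
    #endif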
| |
Uses a perl script to move it (scripts/movedocs.pl)
| |
Now `wstream_write` receives pointers to WBuffer objects (created with
wstream_new_buffer), which store a reference count to determine when it's safe
to free the buffer. This was done to enable writing the same buffer to
multiple WStream instances.
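
A hedged sketch of the reference-counting idea (not the actual WBuffer code):
the same buffer may be queued on several streams and is freed only when the
last pending write releases it.

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct {
      char *data;
      size_t size;
      size_t refcount;              // one reference per stream the buffer is queued on
      void (*free_cb)(void *data);  // e.g. free(); run when the last reference drops
    } WriteBuffer;

    static void write_buffer_release(WriteBuffer *buf)
    {
      if (--buf->refcount == 0) {
        if (buf->free_cb) {
          buf->free_cb(buf->data);
        }
        free(buf);
      }
    }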