| Commit message | Author | Age |
... | |
|
|
|
|
|
|
|
| |
The old mch_libcall was removed from neovim. This is a partial
reimplementation on top of libuv. It doesn't catch exceptions (Windows) or
signals (Unix) though, so it's quite a bit more prone to crashing if the
loadable library throws an exception or crashes. Still, it should be fine
for well-behaved libraries. Requested by @Shougo.
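A minimal sketch of the libuv-based approach, using `uv_dlopen`/`uv_dlsym`; the
`libexample.so` and `reverse` names are hypothetical stand-ins for whatever the
user passes to `libcall()`, and nothing guards against the call itself crashing,
matching the caveat above:

```c
#include <stdio.h>
#include <uv.h>

// Assumed shape of the called function: takes and returns a C string.
typedef char *(*str_fn)(char *arg);

int main(void)
{
  uv_lib_t lib;
  if (uv_dlopen("libexample.so", &lib)) {
    fprintf(stderr, "dlopen failed: %s\n", uv_dlerror(&lib));
    return 1;
  }

  str_fn fn;
  if (uv_dlsym(&lib, "reverse", (void **)&fn)) {
    fprintf(stderr, "dlsym failed: %s\n", uv_dlerror(&lib));
    uv_dlclose(&lib);
    return 1;
  }

  char arg[] = "hello";
  printf("result: %s\n", fn(arg));  // no exception/signal guard here
  uv_dlclose(&lib);
  return 0;
}
```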
|
| |
|
|
|
|
|
|
|
| |
`FileID` should encapsulate `st_dev` and `st_ino`. It is a new abstraction
used to check if two files are the same. `FileID`s will be embedded inside
other structs like `buf_T` or `ff_visited_T`, where a full `FileInfo` would be
too big.
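A rough sketch of the idea; the field and helper names (`inode`, `device_id`,
`file_id_from_path`) are illustrative, not the actual API:

```c
#include <stdbool.h>
#include <stdint.h>
#include <sys/stat.h>

// Only the device/inode pair is stored, which is enough to decide whether two
// paths refer to the same file.
typedef struct {
  uint64_t inode;
  uint64_t device_id;
} FileID;

static bool file_id_from_path(const char *path, FileID *id)
{
  struct stat st;
  if (stat(path, &st) != 0) {
    return false;
  }
  id->inode = (uint64_t)st.st_ino;
  id->device_id = (uint64_t)st.st_dev;
  return true;
}

static bool file_id_equal(const FileID *a, const FileID *b)
{
  return a->inode == b->inode && a->device_id == b->device_id;
}
```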
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This is how API dispatching worked before this commit:
- The generated `msgpack_rpc_dispatch` function receives a `msgpack_packer`
  argument.
- The response is incrementally built while validating/calling the API.
- Return values/errors are also packed into the `msgpack_packer` while the
  final response is being calculated.
Now the `msgpack_packer` argument is no longer provided, and the
`msgpack_rpc_dispatch` function returns `Object`/`Error` values to
`msgpack_rpc_call`, which uses those values to build the response in a
single pass.
This was done because the new `channel_send_call` function created the
possibility of recursive API invocations, which wasn't possible when sharing a
single `msgpack_sbuffer` across call frames (it was shared implicitly through
the `msgpack_packer` instance).
Since we only start to build the response when the necessary information has
been computed, it's now safe to share a single `msgpack_sbuffer` instance
across all channels and API invocations.
Some other changes also had to be made:
- Handling of the metadata discovery was moved to `msgpack_rpc_call`.
- Expose more types as subtypes of `Object`; this was required to forward the
  return value from `msgpack_rpc_dispatch` to `msgpack_rpc_call`.
- Added more helper macros for casting API types to `Object`.
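A compact sketch of the new control flow, using simplified stand-in `Object`
and `Error` types and signatures rather than the real API headers:

```c
#include <stdbool.h>
#include <stdint.h>

// Stand-in types: the real Object and Error carry more information.
typedef struct { int64_t integer; } Object;
typedef struct { bool set; } Error;

// The generated dispatcher now returns a value instead of packing it.
static Object msgpack_rpc_dispatch(uint64_t channel_id, Error *error)
{
  (void)channel_id;
  error->set = false;
  return (Object) { .integer = 42 };
}

// msgpack_rpc_call only starts building the response once the result (or
// error) is known, so one shared msgpack_sbuffer stays safe even when API
// invocations recurse.
static void msgpack_rpc_call(uint64_t channel_id)
{
  Error error;
  Object result = msgpack_rpc_dispatch(channel_id, &error);
  if (error.set) {
    // pack an error response into the shared buffer
  } else {
    // pack `result` into the shared buffer
    (void)result;
  }
}
```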
|
|
|
|
|
| |
Move validation/conversion functions to msgpack_rpc_helpers to separate them
from the functions that are used by the channel module
|
|
|
|
|
|
|
|
| |
This function is used to send RPC calls to clients. In contrast to
`channel_send_event`, this function will block until the client sends a
response (but it will continue processing requests from that client).
The RPC call stack has a maximum depth of 20.
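A self-contained illustration of the depth guard only; all names here are
hypothetical, and the real function serializes a msgpack-rpc request, writes
it, and runs the event loop until the matching response arrives while still
dispatching requests coming from that client:

```c
#include <stdbool.h>
#include <stdio.h>

enum { MAX_CALL_DEPTH = 20 };
static int call_depth = 0;

static bool send_call(const char *method)
{
  if (call_depth >= MAX_CALL_DEPTH) {
    fprintf(stderr, "RPC call stack too deep: %s\n", method);
    return false;
  }
  call_depth++;
  // ... send the request and block until the client responds ...
  call_depth--;
  return true;
}

int main(void)
{
  return send_call("example_method") ? 0 : 1;
}
```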
|
|
|
|
|
|
|
|
|
|
| |
- Generalize some argument names (event type -> event name,
  event data -> event arg)
- Rename serialize_event to serialize_message
- Rename msgpack_rpc_notification to msgpack_rpc_message
- Extract the message type out of msgpack_rpc_message
- Add an 'id' parameter to msgpack_rpc_message/serialize_message to create
  messages that are not notifications
|
| |
|
| |
|
|
|
|
| |
The value is forwarded to its own WStream instance
|
| |
|
| |
|
|
|
|
|
| |
- Extract code to release WBuffer instances into `release_wbuffer`
- Fix memory leak when wstream_write returns false
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This was done to generalize the usage of `event_poll`, which will now return
`true` only if an event has been processed/deferred before the timeout (if not
-1).
To do that, the `input_ready` calls have been extracted to the input.c
module (the `event_poll` call has been surrounded by `input_ready` calls,
resulting in the same behavior).
The `input_start`/`input_stop` calls still present in `event_poll` are
temporary: when the API becomes the only way to read user input, it will no
longer be necessary to start/stop the input stream.
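A sketch of the wrapping described above; the `input_poll` name and the exact
signatures are assumptions, and the declarations stand in for the real
input.c/event.c functions:

```c
#include <stdbool.h>
#include <stdint.h>

bool input_ready(void);
bool event_poll(int32_t ms);

// Checking for buffered input before and after polling preserves the old
// behavior while letting event_poll report whether *any* event was handled
// before the timeout.
bool input_poll(int32_t ms)
{
  if (input_ready()) {
    return true;           // input was already buffered, no need to poll
  }
  event_poll(ms);          // wait for any event or the timeout
  return input_ready();    // did the processed events produce user input?
}
```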
|
|
|
|
|
| |
The loop condition was set to only exit when user input is processed, but we
must exit on any event to properly notify `event_poll` callers.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
To make it possible to reuse `event_poll` recursively and in other blocking
function calls, this changes how deferred/immediate events are processed:
- There are two queues in event.c, one for immediate events and another for
  deferred events. The queue used when pushing/processing events is determined
  by boolean arguments passed to `event_push`/`event_process` respectively.
- Events pushed to the immediate queue are processed inside `event_poll`, but
  after the `uv_run` call. This is required because the libuv event loop does
  not support recursion, and processing events may result in other
  `event_poll` calls.
- Events pushed to the deferred queue are processed later by calling
  `event_process(true)`. This is required to "trick" vim into treating all
  asynchronous events as special keypresses, which is the least obtrusive
  way of introducing asynchronicity into the editor.
- RStream instances will now forward the `defer` flag to the `event_push` call.
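A minimal sketch of the two-queue scheme, using simplified stand-in types (the
real event.c queues are not fixed-size arrays):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

typedef struct {
  void (*handler)(void *data);
  void *data;
} Event;

typedef struct {
  Event items[64];
  size_t count;
} EventQueue;

static EventQueue immediate_events;
static EventQueue deferred_events;

// RStream instances forward their `defer` flag here.
void event_push(Event event, bool defer)
{
  EventQueue *queue = defer ? &deferred_events : &immediate_events;
  assert(queue->count < 64);
  queue->items[queue->count++] = event;
}

// Called with `false` from event_poll right after uv_run returns (the libuv
// loop cannot run recursively, so events that may re-enter event_poll must run
// outside of it), and with `true` when vim handles deferred events as a
// special keypress.
void event_process(bool deferred)
{
  EventQueue *queue = deferred ? &deferred_events : &immediate_events;
  for (size_t i = 0; i < queue->count; i++) {
    queue->items[i].handler(queue->items[i].data);
  }
  queue->count = 0;
}
```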
|
| |
|
|
|
|
|
|
| |
These functions will never be called directly by the user, so an invalid
channel id can only be the result of a bug. Instead of returning silently, we
abort to improve bug detection.
|
|
|
|
| |
This was done to give more control over memory management to job_write callers.
|
|
|
|
|
| |
This has the same effect as the RStream 'defer' flag, but also works for the
job's exit event.
|
|
|
|
|
| |
'job_start' returns the id as an out parameter, and the 'job_find' function is
now used by eval.c to translate job ids into pointers.
|
|
|
|
|
| |
This function will be used to temporarily change the `defer` flag on rstream
instances.
|
|
|
|
|
| |
The name `async` was not appropriate to describe the behavior enabled by the
flag.
|
|
|
|
|
|
|
|
| |
- Removed the 'copy' parameter from `wstream_new_buffer`. Callers simply pass
  a copy of the buffer if required.
- Added a callback parameter, which is used to notify callers when the data is
  successfully written. The callback is also used to free the buffer (if
  required) and is compatible with `free` from the standard library.
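A hedged usage sketch; the stand-in declarations assume a signature along the
lines of `wstream_new_buffer(data, size, cb)`, and the real one may take
additional parameters:

```c
#include <stdbool.h>
#include <stdlib.h>
#include <string.h>

typedef struct wbuffer WBuffer;
typedef struct wstream WStream;
WBuffer *wstream_new_buffer(char *data, size_t size, void (*cb)(void *));
bool wstream_write(WStream *wstream, WBuffer *buffer);

// The caller decides whether to copy and passes `free` as the completion
// callback, so the copy is released once the write finishes.
bool send_copy(WStream *ws, const char *src, size_t len)
{
  char *data = malloc(len);
  if (data == NULL) {
    return false;
  }
  memcpy(data, src, len);
  return wstream_write(ws, wstream_new_buffer(data, len, free));
}
```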
|
|
|
|
|
|
|
|
|
|
|
| |
Before this change, any write that could cause a WStream instance to use more
than `maxmem` would fail, which is not acceptable when writing big chunks of
data. (This could happen when returning the contents of a big buffer through
the API, for example.)
Writes of any size are now allowed, but before writing we check that the
currently used memory does not already exceed the limit. This should be enough
to keep us from piling up data when talking to a locked process.
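A sketch of the check, with assumed field names:

```c
#include <stdbool.h>
#include <stddef.h>

// A write of any size is accepted as long as the data already queued has not
// crossed the limit, so a single big chunk can always go out, but nothing more
// is queued while the other end is not draining.
typedef struct {
  size_t curmem;   // bytes currently queued for writing
  size_t maxmem;   // soft limit configured for this stream
} WStreamState;

static bool can_queue_write(const WStreamState *state)
{
  return state->curmem <= state->maxmem;
}
```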
|
|
|
|
|
|
|
|
|
| |
There seems to be no way to deal with failures when calling
`msgpack_unpacker_next`, so this reimplements that function as
`msgpack_rpc_unpack`, which has an additional result for detecting failures.
On top of that, we make use of the new function to properly return msgpack-rpc
errors when something bad happens.
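A hypothetical shape for the replacement; the enum constants shown here are
illustrative, and the point is only that the result distinguishes "need more
data" from a hard parse failure:

```c
#include <msgpack.h>

typedef enum {
  kUnpackResultOk,        // a complete message was unpacked into `result`
  kUnpackResultNeedMore,  // not enough bytes yet, keep reading from the stream
  kUnpackResultFail       // invalid input, reply with a msgpack-rpc error
} UnpackResult;

UnpackResult msgpack_rpc_unpack(msgpack_unpacker *unpacker,
                                msgpack_unpacked *result);
```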
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
| |
It's a 1-byte loss of memory but it allows us to skip copying and
NULL-terminating strings when interacting with vim functions that accept C
strings. This lowers the pressure on the allocator and saves lines of code
(no more dup/free pairs).
|
| |
|
| |
|
| |
|
|
|
|
| |
So that they do the last nvim/func_attr.h include
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
- The 'stripdecls.py' script replaces declarations in all headers with
  includes of the generated headers.
  `ag '#\s*if(?!ndef NEOVIM_).*((?!#\s*endif).*\n)*#ifdef INCLUDE_GENERATED'`
  was used for this.
- Add and integrate gendeclarations.lua into the build system to generate the
  required includes.
- Add -Wno-unused-function.
- Made a bunch of old-style definitions ANSI.
This adds a requirement: all type and structure definitions must be present
before the INCLUDE_GENERATED_DECLARATIONS-protected include.
Warning: mch_expandpath (path.h.generated.h) was moved manually. So far it is
the only exception.
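For reference, this is the pattern a header is left with after the script runs
(shown for path.h, which the warning above mentions; other headers follow the
same pattern):

```c
#ifdef INCLUDE_GENERATED_DECLARATIONS
# include "path.h.generated.h"
#endif
```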
|
|
|
|
| |
Uses a Perl script to move it (scripts/movedocs.pl)
|
|
|
|
|
|
| |
Occurs when compiling with:
rm -rf build/ && make clean && make cmake CFLAGS='-DNDEBUG' && make
^--important
|
|
|
|
|
| |
To replace `Map(T)`, a new macro `PMap(T)` was defined as `Map(T, ptr_t)` for
writing maps that store pointers with less boilerplate
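The macro itself is essentially a one-liner, as described: pointer-valued maps
only need to name their key type, and values are stored as the generic ptr_t.

```c
#define PMap(T) Map(T, ptr_t)
```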
|
| |
|
| |
|
| |
|
|
|
|
|
| |
The channel_send_event function will now broadcast events to all subscribed
channels if the 'id' parameter is 0.
|
|
|
|
|
|
|
| |
Now `wstream_write` receives pointers to WBuffer objects (created with
wstream_new_buffer), which store a reference count to determine when it's safe
to free the buffer. This was done to enable writing the same buffer to
multiple WStream instances.
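A simplified sketch of the reference counting, with assumed field names: each
WStream the buffer is written to holds a reference, and the data is freed only
when the last write completes.

```c
#include <assert.h>
#include <stddef.h>
#include <stdlib.h>

typedef struct {
  char *data;
  size_t size;
  size_t refcount;
} WBuffer;

static void wbuffer_release(WBuffer *buffer)
{
  assert(buffer->refcount > 0);
  if (--buffer->refcount == 0) {
    free(buffer->data);
    free(buffer);
  }
}
```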
|
| |
|
|
|
|
|
| |
This removes the boilerplate code supporting more than one RPC protocol as it
was becoming hard to maintain and we probably won't ever need it.
|
| |
|
| |
|
|
|
|
|
| |
This function can be used to send arbitrary objects via the API channel back to
connected clients, identified by channel id.
|
|
|
|
|
| |
This refactors msgpack_rpc_{dispatch,call} to receive the channel id as an
argument. Now the discovery request returns the [id, metadata] array.
|