| Commit message | Author | Age |
|
|
|
|
|
|
|
|
| |
Co-authored-by: Dustin S. <dstackmasta27@gmail.com>
Co-authored-by: Ferenc Fejes <fejes@inf.elte.hu>
Co-authored-by: Maria José Solano <majosolano99@gmail.com>
Co-authored-by: Yochem van Rosmalen <git@yochem.nl>
Co-authored-by: brianhuster <phambinhanctb2004@gmail.com>
Co-authored-by: zeertzjq <zeertzjq@outlook.com>
|
|
|
| |
Result of `make iwyu` (after some "fixups").
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the api_info() output:
:new|put =map(filter(api_info().functions, '!has_key(v:val,''deprecated_since'')'), 'v:val')
...
{'return_type': 'ArrayOf(Integer, 2)', 'name': 'nvim_win_get_position', 'method': v:true, 'parameters': [['Window', 'window']], 'since': 1}
The `ArrayOf(Integer, 2)` return type didn't break clients when we added
it, which is evidence that clients don't use the `return_type` field.
Thus renaming Dictionary => Dict in api_info() is not (in practice)
a breaking change.
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
`nvim --listen` does not error on EADDRINUSE. #30123
Solution:
Now that `$NVIM_LISTEN_ADDRESS` is deprecated and input *only* (instead
of the old, ambiguous situation where it was both an input *and* an
output), we can fail fast instead of trying to "recover". This
reverts the "recovery" behavior of
704ba4151e7f67999510ee0ac19fdabb595d530c, but that was basically
a workaround for the fragility of `$NVIM_LISTEN_ADDRESS`.
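To illustrate the fail-fast behavior, here is a minimal sketch using plain POSIX sockets rather than Nvim's libuv-based server code; the function name and error handling are illustrative only:

```c
// Hedged sketch: error out when the requested socket address is taken,
// instead of silently falling back to a generated address.
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <sys/un.h>
#include <unistd.h>

int listen_or_fail(const char *path)
{
  int fd = socket(AF_UNIX, SOCK_STREAM, 0);
  if (fd < 0) {
    return -1;
  }
  struct sockaddr_un addr = { .sun_family = AF_UNIX };
  strncpy(addr.sun_path, path, sizeof(addr.sun_path) - 1);
  if (bind(fd, (struct sockaddr *)&addr, sizeof(addr)) != 0) {
    if (errno == EADDRINUSE) {
      // Old behavior: retry with a different address. New behavior: fail fast.
      fprintf(stderr, "address already in use: %s\n", path);
    }
    close(fd);
    return -1;
  }
  if (listen(fd, 16) != 0) {
    close(fd);
    return -1;
  }
  return fd;
}
```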
|
|
|
|
|
|
|
|
|
| |
Use the grapheme break algorithm from utf8proc to support grapheme
clusters from recent Unicode versions.
Handle variant selector VS16 turning some codepoints into double-width
emoji. This means we need to use ptr2cells rather than char2cells when
possible.
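A rough illustration of the two pieces involved, using utf8proc's public API (utf8proc_grapheme_break and utf8proc_charwidth); the helpers and the exact width rule below are illustrative, not Nvim's actual ptr2cells logic:

```c
// Hedged sketch: grapheme boundaries and VS16 emoji width via utf8proc.
#include <stdbool.h>
#include <utf8proc.h>

// true if a grapheme cluster ends between prev and cur (UAX #29).
bool is_grapheme_boundary(utf8proc_int32_t prev, utf8proc_int32_t cur)
{
  return utf8proc_grapheme_break(prev, cur);
}

// Width in screen cells of a codepoint followed by an optional selector.
int cell_width(utf8proc_int32_t cp, utf8proc_int32_t next)
{
  int width = utf8proc_charwidth(cp);
  if (next == 0xFE0F && width == 1) {
    // VS16 requests emoji presentation, which is rendered double-width.
    width = 2;
  }
  return width;
}
```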
|
|
|
|
|
|
|
| |
This also makes shada reading slightly faster due to avoiding
some copying and allocation.
Use keysets to drive decoding of msgpack maps for shada entries.
|
| |
|
|
|
|
|
|
| |
This particular repro is quite niche, but there could be other cases
whenever the second-to-last cell plus the "fill" cell together are too
complex.
|
|
|
|
|
|
|
|
| |
Problem: Calling :redraw from vim.ui_attach() callback results in
recursive cmdline/message events.
Solution: Avoid recursiveness where possible and replace global "call_buf"
with separate, temporary buffers for each event so that when a Lua
callback for one event fires another event, that does not result
in invalid memory access.
|
|
|
|
|
| |
ui_flush_buf() doesn't know about `lenpos`, so `remote_ui_raw_line`
always needs to handle it before flushing.
|
|
|
|
|
|
| |
- Tags are now created with `[tag]()`
- References are now created with `[tag]`
- Code spans are no longer wrapped
|
|
|
|
|
|
|
|
| |
Just some basic spring cleaning.
In the distant past, not all UIs were remote UIs. They still aren't,
but both the "UI" and "UIData" structs are now used only for remote UIs.
Thus join them as "RemoteUI".
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before, we always needed to pack an entire msgpack_rpc Object into
a contiguous memory buffer before sending it out to a channel.
But this is generally wasteful: it is better to just flush
whatever is in the buffer and then continue packing into a new buffer.
This is also done for the UI event packer, where there is some extra logic
to "finish off" an existing batch of nevents/ncalls. This doesn't really
stop us from flushing the buffer; we just need to update the state
machine accordingly so that the next call to prepare_call() will always
start with a new event (even though the buffer might contain overflow
data from a large event).
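Roughly, the packer writes into a fixed buffer and flushes to the channel whenever it fills up, instead of growing the buffer to hold a whole Object. A simplified sketch; PackerBuffer and the flush callback are stand-ins for the real packing state, and the nevents/ncalls bookkeeping described above is omitted:

```c
// Hedged sketch of "flush instead of grow".
#include <stddef.h>
#include <string.h>

typedef struct {
  char buf[4096];
  size_t len;
  void (*flush)(const char *data, size_t len);  // sends bytes to the channel
} PackerBuffer;

static void packer_put(PackerBuffer *p, const void *data, size_t len)
{
  const char *src = data;
  while (len > 0) {
    size_t space = sizeof(p->buf) - p->len;
    if (space == 0) {
      p->flush(p->buf, p->len);  // flush what we have, keep packing
      p->len = 0;
      space = sizeof(p->buf);
    }
    size_t chunk = len < space ? len : space;
    memcpy(p->buf + p->len, src, chunk);
    p->len += chunk;
    src += chunk;
    len -= chunk;
  }
}
```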
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Problem:
The documentation flow (`gen_vimdoc.py`) has several issues:
- It's not very versatile.
- It depends on Doxygen.
- It doesn't work well with Lua code, as it requires an awkward filter script to convert it into pseudo-C.
- The intermediate XML files and filters make it too much like a Rube Goldberg machine.
Solution:
Re-implement the flow using Lua, LPEG and treesitter.
- `gen_vimdoc.py` is now replaced with `gen_vimdoc.lua` and replicates a portion of the logic.
- `lua2dox.lua` is gone!
- No more XML files.
- Doxygen is no longer used; instead we now use:
  - LPEG for comment parsing (see `scripts/luacats_grammar.lua` and `scripts/cdoc_grammar.lua`).
  - LPEG for C parsing (see `scripts/cdoc_parser.lua`).
  - Lua patterns for Lua parsing (see `scripts/luacats_parser.lua`).
  - Treesitter for Markdown parsing (see `scripts/text_utils.lua`).
- The generated `runtime/doc/*.mpack` files have been removed.
- `scripts/gen_eval_files.lua` now uses `scripts/cdoc_parser.lua` directly.
- Text wrapping is implemented in `scripts/text_utils.lua` and appears to produce more consistent results (the main contributor to the diff of this change).
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
| |
Note: this contains two _temporary_ changes which can be reverted
once the Arena vs no-Arena distinction in API wrappers has been removed.
Both nlua_push_Object and object_to_vim_take_luaref() have been changed
to take the object argument as a pointer. This is not going to be
necessary once these are only used with arena (or not at all) allocated
Objects.
The object_to_vim() variant which leaves luaref untouched might need to
stay for a little longer.
|
|\
| |
| | |
fix: crashes with large msgpack messages
|
| |
| |
| |
| |
| |
| |
| |
| |
| | |
Determine the needed buffer space first, instead of trying to revert the
effect of prepare_call if the message does not fit. The previous code did
not revert the full state, which caused corrupted messages to be sent.
So, rather than trying to fix all of that, with fragile and hard-to-read
code as a result, the code is now much simpler, although slightly
slower.
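In other words: ensure the whole encoded message fits before writing any of it, so there is never partial state to undo. A hedged sketch of that shape; the Buffer type and flush callback are illustrative, not the actual code:

```c
// Hedged sketch: size-first packing. If the message does not fit in the
// remaining space, flush first, so a partially written message never needs
// to be reverted.
#include <stdbool.h>
#include <stddef.h>
#include <string.h>

typedef struct {
  char *data;
  size_t len;
  size_t cap;
} Buffer;

static bool buf_put_whole(Buffer *b, const char *msg, size_t needed,
                          void (*flush)(Buffer *b))  // flush resets b->len to 0
{
  if (needed > b->cap) {
    return false;  // cannot fit even into an empty buffer; caller must handle
  }
  if (b->len + needed > b->cap) {
    flush(b);
  }
  memcpy(b->data + b->len, msg, needed);
  b->len += needed;
  return true;
}
```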
|
| | |
|
|/
|
|
|
| |
In the contexts where a String inside an Object/Dictionary etc. is consumed,
it is considered to be read-only.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Extmarks can contain URLs which can then be drawn in any supporting UI.
In the TUI, for example, URLs are "drawn" by emitting the OSC 8 control
sequence to the TTY. On terminals which support the OSC 8 sequence this
will create clickable hyperlinks.
URLs are treated as inline highlights in the decoration subsystem, so
are included in the `DecorSignHighlight` structure. However, unlike
other inline highlights they use allocated memory which must be freed,
so they set the `ext` flag in `DecorInline` so that their lifetimes are
managed along with other allocated memory like virtual text.
The decoration subsystem then adds the URLs as a new highlight
attribute. The highlight subsystem maintains a set of unique URLs to
avoid duplicating allocations for the same string. To attach a URL to an
existing highlight attribute we call `hl_add_url` which finds the URL in
the set (allocating and adding it if it does not exist) and sets the
`url` highlight attribute to the index of the URL in the set (using an
index helps keep the size of the `HlAttrs` struct small).
This has the potential to lead to an increase in highlight attributes
if a URL is used over a range that contains many different highlight
attributes, because now each existing attribute must be combined with
the URL. In practice, however, URLs typically span a range containing a
single highlight (e.g. link text in Markdown), so this is likely just a
pathological edge case.
When a new highlight attribute is defined with a URL it is copied to all
attached UIs with the `hl_attr_define` UI event. The TUI manages its own
set of URLs (just like the highlight subsystem) to minimize allocations.
The TUI keeps track of which URL is "active" for the cell it is
printing. If no URL is active and a cell containing a URL is printed,
the opening OSC 8 sequence is emitted and that URL becomes the actively
tracked URL. If the cursor is moved while in the middle of a URL span,
we emit the terminating OSC sequence to prevent the hyperlink from
spanning multiple lines.
This does not support nested hyperlinks, but that is a rare (and,
frankly, bizarre) use case. If a valid use case for nested hyperlinks
ever presents itself we can address that issue then.
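For reference, the OSC 8 sequence the TUI emits looks roughly like the sketch below; the real TUI writes through its terminal output layer rather than printf, so this is only an illustration of the escape sequence itself:

```c
// Hedged sketch of OSC 8 hyperlink output. On supporting terminals this makes
// the text clickable; each sequence is terminated by ST (ESC \).
#include <stdio.h>

void print_hyperlink(const char *url, const char *text)
{
  printf("\033]8;;%s\033\\", url);  // open: OSC 8 ; params ; URI ST
  printf("%s", text);               // the visible cell text
  printf("\033]8;;\033\\");         // close: OSC 8 with empty URI
}

int main(void)
{
  print_hyperlink("https://neovim.io", "click me");
  printf("\n");
  return 0;
}
```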
|
|
|
|
|
|
| |
Remove `export` pragmas from defs headers, as they cause IWYU to believe
that the definitions from the defs headers come from the main header, which
is not what we really want.
|
|
|
|
|
|
|
|
|
|
|
| |
A bit big, but in practice it was a lot simpler to change over all
fillchars and all listchars at once, so as not to maintain two
parallel implementations.
This is mostly an internal refactor, but it also removes an arbitrary
limitation: that 'fillchars' and 'listchars' values can only be
single-codepoint characters. Now any character which fits into a single
screen cell can be used.
|
|
|
|
| |
Reference: https://github.com/neovim/neovim/issues/6371.
|
| |
|
|
|
| |
Discovered using __sanitizer_print_memory_profile().
|
|
|
|
|
|
| |
FUNC_ATTR_* should only be used in .c files with generated headers.
Defining FUNC_ATTR_* as empty in headers causes misuses of them to be
silently ignored. Instead don't define them by default, and only define
them as empty after a .c file has included its generated header.
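The expected shape of a source file is then roughly as sketched below; the file name, header names, and attributes are placeholders, not a specific file in the tree:

```c
// foo.c -- hedged sketch of the pattern described above.
#include "nvim/foo.h"

// The generated header declares the functions with the real attributes.
// Only after including it do the FUNC_ATTR_* macros become defined (as
// empty), so the definitions below compile cleanly.
#ifdef INCLUDE_GENERATED_DECLARATIONS
# include "foo.c.generated.h"
#endif

int foo_add(int a, int b)
  FUNC_ATTR_CONST FUNC_ATTR_WARN_UNUSED_RESULT
{
  return a + b;
}
```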
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
| |
Problem:
Platform-specific UI providers should live in `vim.ui.*`. #24164
Solution:
- Move `vim.clipboard.osc52` module to `vim.ui.clipboard.osc52`.
- TODO: move all of `clipboard.vim` to `vim.ui.clipboard`.
ref #25872
|
| |
|
| |
|
|
|
|
|
|
|
| |
We already have an extensive suite of static analysis tools we use,
which causes a fair bit of redundancy as we get duplicate warnings. PVS
is also prone to giving false warnings, which creates a lot of work to
identify and disable.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
When the terminal emulator sends an OSC sequence to Nvim (as a response
to another OSC sequence that was first sent by Nvim), populate the OSC
sequence in the v:termresponse variable and fire the TermResponse event.
The escape sequence is also included in the "data" field of the
autocommand callback when the autocommand is defined in Lua.
This makes use of the already documented but unimplemented TermResponse
event. This event exists in Vim but is only fired when Vim receives a
primary device attributes response.
Fixes: https://github.com/neovim/neovim/issues/25856
|
|
|
|
|
|
| |
A connection from any channel or stdio will unblock
remote_ui_wait_for_attach. Wait on stdio only if only
`--embed` is specified; if both `--embed` and
`--listen` are given, then wait on any channel.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, a screen cell would occupy 28+4=32 bytes per cell,
as we always made space for up to MAX_MCO+1 codepoints in a cell.
As an example, even a pretty modest 50*80 screen would consume
50*80*2*32 = 256000 bytes, i.e. a quarter megabyte,
with the factor of two due to the TUI side buffer, and even more when
using msg_grid and/or ext_multigrid.
This instead stores a 4-byte union of either:
- a valid UTF-8 sequence up to 4 bytes
- an escape char which is invalid UTF-8 (0xFF) plus a 24-bit index to a
  glyph cache
This avoids allocating space for huge composed glyphs _upfront_, while
still keeping rendering such glyphs reasonably fast (1 hash table lookup
+ one plain index lookup). If the same large glyphs are used repeatedly
on the screen, this is still a net reduction of memory/cache
consumption. The only case which really gets worse is if you blast
the screen full with crazy emojis and zalgo text, and even this case
only leads to 4 extra bytes per char.
When only <= 4-byte glyphs are used, plus the 4-byte attribute code,
i.e. 8 bytes in total, there is a factor-of-four reduction of memory use.
This memory will be quite hot in cache, as the screen buffer is scanned
over in win_line() buffer text drawing.
A slight complication is that the representation depends on host byte
order. I've tested this manually by compiling and running this
in qemu-s390x and it works fine. We might add a qemu-based solution
to CI at some point.
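A sketch of the 4-byte cell layout described above; the names and helpers are hypothetical, and a little-endian layout is assumed for where the 0xFF marker byte sits:

```c
// Hedged sketch: each cell stores either a short UTF-8 sequence inline, or
// 0xFF (never a valid UTF-8 lead byte) plus a 24-bit index into a glyph
// cache for longer composed glyphs.
#include <stdbool.h>
#include <stdint.h>
#include <string.h>

typedef uint32_t cell_char;  // 4 bytes, per the description above

static cell_char cell_from_ascii(char c)
{
  cell_char cell = 0;
  memcpy(&cell, &c, 1);  // remaining bytes stay zero (NUL padding)
  return cell;
}

static bool cell_is_overflow(cell_char cell)
{
  return (cell & 0xFF) == 0xFF;  // marker byte, little-endian layout assumed
}

static uint32_t cell_glyph_index(cell_char cell)
{
  return (cell >> 8) & 0xFFFFFF;  // 24-bit index into a separate glyph cache
}
```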
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This involves two redesigns of the map.c implementations:
1. Change of macro style and code organization
The old khash.h and map.c implementation used huge #define blocks with a
lot of backslash line continuations.
This instead uses the "implementation file" .c.h pattern. Such a file is
meant to be included multiple times, with different macros set prior to
inclusion as parameters. We already use this pattern e.g. for
eval/typval_encode.c.h to implement different typval encoders reusing a
similar structure.
We can structure this code into two parts: one that only depends on the key
type and is enough to implement sets, and one which depends on both key
and value to implement maps (as a wrapper around sets, with an added
value[] array).
2. Separate the main hash buckets from the key / value arrays
Change the hash buckets to only contain an index into separate key /
value arrays.
This is a common pattern in modern, state-of-the-art hashmap
implementations. Even though this leads to one more allocated array, it
is often a net reduction of memory consumption. Consider
key+value consuming at least 12 bytes per pair. On average, we will have
twice as many buckets per item.
Thus the old implementation uses:
2*12 = 24 bytes per item
The new implementation uses:
1*12 + 2*4 = 20 bytes per item
And the difference gets bigger with larger items.
One might think we have pulled a fast one here, as wouldn't the average size of
the new key/value arrays be 1.5 slots per item due to amortized growth?
But remember, these arrays are fully dense, and thus the accessed memory,
measured in _cache lines_, the unit which actually matters, will be the
fully used memory but just rounded up to the nearest cache line
boundary.
This has some other interesting properties, such as that an insert-only
set/map will be fully ordered by insertion. Preserving this ordering
in the face of deletions is trickier, though. As we currently don't use
ordered maps, the "delete" operation maintains compactness of the item
arrays in the simplest way by breaking the ordering. It would be
possible to implement an order-preserving delete, although at some cost,
like allowing the items array to become non-dense until the next rehash.
Finally, in face of these two major changes, all code used in khash.h
has been integrated into map.c and friends. Given the heavy edits it
makes no sense to "layer" the code into a vendored and a wrapper part.
Rather, the layered cake follows the specialization depth: code shared
for all maps, code specialized to a key type (and its equivalence
relation), and finally code specialized to value+key type.
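A condensed sketch of the bucket-indirection idea; the real map.c is macro-generated per key/value type, so the names and the fixed key/value types here are only illustrative:

```c
// Hedged sketch: hash buckets hold only a small index into dense key/value
// arrays, rather than storing the pairs inline in the bucket array.
#include <stddef.h>
#include <stdint.h>
#include <string.h>

#define EMPTY UINT32_MAX

typedef struct {
  uint32_t *hash;    // n_buckets entries; EMPTY or an index into keys/vals
  size_t n_buckets;  // power of two, roughly 2x n_items on average
  char **keys;       // dense array of n_items keys (insertion order)
  int *vals;         // dense array of n_items values, parallel to keys
  size_t n_items;
} StrIntMap;

// Find a value by key; assumes the table is never completely full.
static int *strintmap_find(StrIntMap *m, const char *key, uint32_t hashval)
{
  for (size_t i = hashval & (m->n_buckets - 1); ;
       i = (i + 1) & (m->n_buckets - 1)) {
    uint32_t idx = m->hash[i];
    if (idx == EMPTY) {
      return NULL;                  // not present
    }
    if (strcmp(m->keys[idx], key) == 0) {
      return &m->vals[idx];         // value lives in the dense array
    }
  }
}

// Per item: one pair in the dense arrays (>= 12 bytes) plus ~2 bucket slots
// of 4 bytes each, vs. ~2 full inline pairs before -- the 20 vs. 24 bytes
// from the message above.
```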
|
|
|
| |
Co-authored-by: bfredl <bjorn.linse@gmail.com>
|
| |
|
|
|
|
|
|
| |
This fixes the TUI's line-wrapping behavior, which was broken with the
migration to the msgpack-based UI protocol (see
https://github.com/neovim/neovim/issues/7369#issuecomment-1571812273).
|
|
|
|
| |
Adds new API helper macros `CSTR_AS_OBJ()`, `STATIC_CSTR_AS_OBJ()`, and `STATIC_CSTR_TO_OBJ()`, which clean up a lot of the current code. These macros will also be used extensively in the upcoming option refactor PRs because then API Objects will be used to get/set options. This PR also modifies pre-existing code to use old API helper macros like `CSTR_TO_OBJ()` to make them cleaner.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
This reduces the total number of khash_t instantiations from 22 to 8.
Make the khash internal functions take the size of values as a runtime
parameter. This is abstracted with typesafe Map containers, which
are still specialized for both key and value types.
Introduce `Set(key)` type for when there is no value.
Refactor shada.c to use Map/Set instead of khash directly.
This requires the `map_ref` operation to be more flexible.
Return pointers to both key and value, plus an indicator for new_item.
As a bonus, `map_key` is now redundant.
Instead of Map(cstr_t, FileMarks), use a pointer map as the FileMarks struct is
humongous.
Make `event_strings` actually work like an intern pool instead of wtf it
was doing before.
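An intern pool in this sense stores one canonical copy of each distinct string and hands back the same pointer for repeated lookups, so callers can compare by pointer. A minimal self-contained sketch, not the actual event_strings code (a fixed-size table stands in for the real growable set):

```c
// Hedged sketch of a string intern pool using a small open-addressing set.
#include <stdlib.h>
#include <string.h>

#define POOL_BUCKETS 1024  // power of two; a real pool would rehash and grow

static char *pool[POOL_BUCKETS];

static size_t str_hash(const char *s)
{
  size_t h = 5381;
  for (; *s; s++) {
    h = h * 33 + (unsigned char)*s;
  }
  return h;
}

const char *intern(const char *s)
{
  for (size_t i = str_hash(s) & (POOL_BUCKETS - 1); ;
       i = (i + 1) & (POOL_BUCKETS - 1)) {
    if (pool[i] == NULL) {
      pool[i] = strdup(s);  // first time: store the canonical copy
      return pool[i];
    }
    if (strcmp(pool[i], s) == 0) {
      return pool[i];       // already interned: reuse the same pointer
    }
  }
}
```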
|
| |
|