| Commit message | Author | Age |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
In the api_info() output:
:new|put =map(filter(api_info().functions, '!has_key(v:val,''deprecated_since'')'), 'v:val')
...
{'return_type': 'ArrayOf(Integer, 2)', 'name': 'nvim_win_get_position', 'method': v:true, 'parameters': [['Window', 'window']], 'since': 1}
The `ArrayOf(Integer, 2)` return type didn't break clients when we added
it, which is evidence that clients don't use the `return_type` field; so
renaming Dictionary => Dict in api_info() is not, in practice, a breaking
change.
|
| |
|
|
|
|
|
|
|
| |
Use keysets to drive decoding of msgpack maps for shada entries.
This also makes shada reading slightly faster by avoiding some copying
and allocation.
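The commit doesn't show the mechanism, but the general keyset idea can be sketched roughly as below. All names are hypothetical, not the actual shada keyset definitions: known msgpack map keys are matched against a static table and decoded straight into struct fields, so unknown keys can be skipped and no intermediate dictionary has to be allocated, copied and freed.
```c
#include <stddef.h>
#include <string.h>

// Hypothetical keyset sketch: table entries map a map key to a struct field.
typedef struct {
  long lnum;
  long col;
} JumpSketch;

typedef struct {
  const char *key;  // key as it appears in the shada msgpack map
  size_t offset;    // destination field inside JumpSketch
} KeyDescSketch;

static const KeyDescSketch jump_keyset[] = {
  { "l", offsetof(JumpSketch, lnum) },
  { "c", offsetof(JumpSketch, col) },
};

// Returns the field offset for a known key, or (size_t)-1 for unknown keys,
// which the decoder can then simply skip.
static size_t keyset_lookup(const char *key, size_t keylen)
{
  for (size_t i = 0; i < sizeof(jump_keyset) / sizeof(jump_keyset[0]); i++) {
    if (strlen(jump_keyset[i].key) == keylen
        && memcmp(jump_keyset[i].key, key, keylen) == 0) {
      return jump_keyset[i].offset;
    }
  }
  return (size_t)-1;
}
```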
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Before, we always needed to pack an entire msgpack_rpc Object into a
contiguous memory buffer before sending it out to a channel. This is
generally wasteful: it is better to just flush whatever is in the buffer
and then continue packing into a new buffer.
This is also done for the UI event packer, where there is some extra
logic to "finish off" an existing batch of nevents/ncalls. This doesn't
prevent us from flushing the buffer; we just need to update the state
machine accordingly so that the next call to prepare_call() always
starts with a new event (even though the buffer might contain overflow
data from a large event).
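As a rough illustration of the flush-and-continue approach (the type and functions below are hypothetical, not Neovim's actual packer API), the writer hands the partially filled buffer to the channel whenever it fills up and keeps packing into the now-empty buffer:
```c
#include <stddef.h>
#include <string.h>

typedef struct {
  char buf[4096];
  size_t len;
  void (*flush)(const char *data, size_t len);  // e.g. write to the channel
} PackerSketch;

static void packer_put(PackerSketch *p, const char *data, size_t len)
{
  while (len > 0) {
    size_t space = sizeof(p->buf) - p->len;
    if (space == 0) {
      p->flush(p->buf, p->len);  // send what we have so far
      p->len = 0;
      space = sizeof(p->buf);
    }
    size_t chunk = len < space ? len : space;
    memcpy(p->buf + p->len, data, chunk);
    p->len += chunk;
    data += chunk;
    len -= chunk;
  }
}

// An explicit flush sends whatever remains once the message is complete.
static void packer_flush(PackerSketch *p)
{
  if (p->len > 0) {
    p->flush(p->buf, p->len);
    p->len = 0;
  }
}
```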
|
|
|
|
|
|
| |
Note: kSDItemHeader is something that is _written_ by nvim to the shada
file, to identify it for debugging purposes outside of nvim. But this data
was never used by nvim after reading the file back, so I removed the
parsing of it for now.
|
| |
|
|
|
|
|
|
|
| |
We already use an extensive suite of static analysis tools, which causes
a fair bit of redundancy, as we get duplicate warnings. PVS is also prone
to false positives, which creates a lot of work to identify and disable.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
refactor: use a more idiomatic loop to iterate over the cells
There are two cases in which the following assertion would fail:
```c
assert(g->icell < g->ncells);
```
1. If `g->ncells == 0`. Update this to be legal.
2. If an EOF is reached while parsing `wrap`. In this case, the unpacker
attempts to resume from `cells`, which is a bug. Create a new state
for parsing `wrap`.
Reference: https://neovim.io/doc/user/ui.html#ui-event-grid_line
|
|
|
|
|
|
| |
Problem: The style guide states that all switch statements that are not conditional on an enum must have a `default` case, but it gives no explicit guideline for switch statements that are conditional on enums. As a result, a `default` case is added to many enum switch statements even when the switch is exhaustive. This is not ideal, because it forfeits the compiler errors that would otherwise easily flag switch statements that were not updated when a new possible value is added to the enum.
Solution: Add explicit guidelines for switch statements that are conditional on an enum, clarifying that a `default` case is not necessary if the switch statement is exhaustive. Also refactor pre-existing code with unnecessary `default` cases.
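A minimal illustration of the guideline (the enum and function below are made up, not taken from the Neovim source): with no `default` case, building with -Wswitch (and -Werror) makes the compiler flag this switch as soon as a new enumerator is added but not handled.
```c
#include <stdlib.h>

typedef enum {
  kModeNormal,
  kModeInsert,
  kModeVisual,
} Mode;

const char *mode_name(Mode mode)
{
  switch (mode) {
  case kModeNormal:
    return "normal";
  case kModeInsert:
    return "insert";
  case kModeVisual:
    return "visual";
  }
  abort();  // unreachable for valid values; also satisfies -Wreturn-type
}
```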
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Previously, a screen cell would occupy 28+4=32 bytes per cell, as we
always made space for up to MAX_MCO+1 codepoints in a cell. As an
example, even a pretty modest 50*80 screen would consume
50*80*2*32 = 256000 bytes, i.e. a quarter of a megabyte, with the factor
of two due to the TUI side buffer, and even more when using msg_grid
and/or ext_multigrid.
This instead stores a 4-byte union of either:
- a valid UTF-8 sequence of up to 4 bytes, or
- an escape char which is invalid UTF-8 (0xFF) plus a 24-bit index into a
  glyph cache
This avoids allocating space for huge composed glyphs _upfront_, while
still keeping rendering of such glyphs reasonably fast (one hash table
lookup plus one plain index lookup). If the same large glyphs are used
repeatedly on the screen, this is still a net reduction of memory/cache
consumption. The only case which really gets worse is if you blast the
screen full of crazy emojis and zalgo text, and even then it only costs
4 extra bytes per char.
When only glyphs of up to 4 bytes are used, a cell takes 8 bytes in total
(including the 4-byte attribute code), a factor-of-four reduction in
memory use. This memory will be quite hot in cache, as the screen buffer
is scanned over in win_line() buffer text drawing.
A slight complication is that the representation depends on host byte
order. I've tested this manually by compiling and running this in
qemu-s390x and it works fine. We might add a qemu-based solution to CI
at some point.
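A hedged sketch of what such a 4-byte representation could look like; the type and field names are illustrative, not the actual Neovim definitions:
```c
#include <stdint.h>

typedef union {
  // A complete UTF-8 sequence of up to 4 bytes (NUL-padded when shorter).
  char utf8[4];
  // If the first byte is 0xFF (never a valid UTF-8 lead byte), the remaining
  // 24 bits are an index into a glyph cache holding longer composed
  // sequences. Which byte is "first" depends on host byte order, hence the
  // big-endian caveat mentioned above.
  uint32_t raw;
} CellTextSketch;

// A cell is then this 4-byte union plus a 4-byte highlight attribute id:
// 8 bytes in total instead of the previous 32.
typedef struct {
  CellTextSketch text;
  int32_t attr;
} CellSketch;
```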
|
|\
| |
| | |
fix(ui): propagate line flags on grid_line events
|
| |
| |
| |
| |
| |
| | |
This fixes the TUI's line-wrapping behavior, which was broken with the
migration to the msgpack-based UI protocol (see
https://github.com/neovim/neovim/issues/7369#issuecomment-1571812273).
|
|/
|
| |
Remove redundant `BOOL` macro that does the same thing as `BOOLEAN_OBJ`.
|
| |
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
* fix: log and clear error in ui_comp_event (see the sketch after this list)
* fix: handle error in each map_foreach_value iteration
* fix: handle error decl in for_each loop
* fix: update initerr to const, remove initerr free-ing
* fix: use ERROR_SET for error check
* fix: wrap ERROR_INIT in parens to allow it to be used inside a macro
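A sketch of the per-iteration pattern described above, assuming Neovim's Error helpers (ERROR_INIT, ERROR_SET, api_set_error, api_clear_error, ELOG); the loop and the called function are stand-ins, not the actual ui_comp_event code:
```c
#include "nvim/api/private/defs.h"     // Error, ERROR_INIT, ERROR_SET
#include "nvim/api/private/helpers.h"  // api_set_error, api_clear_error
#include "nvim/log.h"                  // ELOG

// Hypothetical stand-in for the real per-event handler call.
static void fake_ui_event(Error *err)
{
  api_set_error(err, kErrorTypeException, "something went wrong");
}

static void handle_events_sketch(void)
{
  for (int i = 0; i < 3; i++) {
    Error err = ERROR_INIT;    // fresh error for every iteration
    fake_ui_event(&err);
    if (ERROR_SET(&err)) {     // check via the macro, not by peeking at err.type
      ELOG("error in UI event handler: %s", err.msg);
      api_clear_error(&err);   // log and clear; no manual free of err.msg
    }
  }
}
```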
|
|
|
|
|
|
|
|
|
|
|
| |
* fix(PVS/V009): start file with special comment
* fix(PVS/V501): identical sub-expressions for comparison
* fix(PVS/V560): part of conditional expression is always true/false
* fix(PVS/V593): review expression of type A = B < C
* fix(PVS/V614): potentially uninitialized variable used
|
|
|
|
|
|
|
|
|
|
| |
Allow Include What You Use to remove unnecessary includes and only
include what is necessary. This helps reduce compilation times and makes
it easier to visualise which dependencies are actually required.
Work on https://github.com/neovim/neovim/issues/549, but doesn't close
it, since this only works fully for .c files and not headers.
|
|
|
|
|
| |
This is both simpler in client code and more effective (the most
recently freed block, i.e. the one hottest in cache, is always the one
that gets reused).
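A hypothetical sketch of the idea, not Neovim's actual arena code: a one-slot cache holds the most recently released block, so the cache-hot block is reused first and callers never have to thread a "reuse block" pointer through their own code.
```c
#include <stdlib.h>

enum { ARENA_BLOCK_SIZE = 4096 };

static void *reuse_blk;  // single cached block, NULL when empty

static void *arena_block_get(void)
{
  if (reuse_blk != NULL) {
    void *blk = reuse_blk;  // hand out the last freed (cache-hot) block
    reuse_blk = NULL;
    return blk;
  }
  return malloc(ARENA_BLOCK_SIZE);
}

static void arena_block_put(void *blk)
{
  free(reuse_blk);  // keep only the newest block; drop any older one
  reuse_blk = blk;
}
```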
|
| |
|
| |
|
|
|
|
|
| |
This reduces the memory overhead for large redraw batches, as a much smaller
prefix of the api object buffer is used and needs to be hot in cache.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Note for external UIs: Nvim can now emit multiple "redraw" event batches
before a final "flush" event is received. To retain existing behavior,
clients should make sure to update visible state on an explicit "flush"
event, not just at the end of a "redraw" batch of events.
* Get rid of copy_object() blizzard in the auto-generated ui_event layer
* Special case "grid_line" by encoding screen state directly to
msgpack events with no intermediate API events.
* Get rid of the arcane notion of referring to the screen as the "shell"
* Array and Dictionary are kvec_t:s, so define them as such.
* Allow kvec_t:s, such as Arrays and Dictionaries, to be allocated with
a predetermined size within an arena.
* Eliminate redundant capacity checking when filling such kvec_t:s
  with values (a sketch of the idea follows after this list).
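A rough sketch of the last two points (all names hypothetical, not the real Array/Arena API): when the final size is known up front, the vector's storage is carved out of an arena once, and pushes need no capacity check or reallocation.
```c
#include <stddef.h>

typedef struct {
  char *mem;     // arena block
  size_t used;
  size_t size;
} ArenaSketch;

static void *arena_take(ArenaSketch *a, size_t n)
{
  void *p = a->mem + a->used;  // assume the caller sized the arena correctly
  a->used += n;
  return p;
}

typedef struct {
  int *items;    // stand-in for the real Object element type
  size_t size;
  size_t capacity;
} VecSketch;

static VecSketch vec_fixed(ArenaSketch *a, size_t max)
{
  return (VecSketch){ .items = arena_take(a, max * sizeof(int)),
                      .size = 0, .capacity = max };
}

// The caller promises at most `capacity` pushes, so no bounds check is needed.
static inline void vec_push_unchecked(VecSketch *v, int item)
{
  v->items[v->size++] = item;
}
```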
|
|
|
|
|
|
|
|
|
| |
Drawback: tracing memory errors with ASAN is less accurate for
arena-allocated memory.
Therefore, to start with, it is used exclusively for Object types around
serialization/deserialization. This is going to have a large impact
especially when the TUI is refactored as a co-process, as all UI events
will then be serialized and deserialized by nvim itself.
|
| |
|
|
Currently this is more or less a straight-off reimplementation, but it
allows further optimizations down the line, especially for avoiding
memory allocations of rpc objects.
The current score for "make functionaltest; make oldtest" on a -DEXITFREE
build is 117 055 352 xfree(ptr != NULL) calls (that's NUMBERWANG!).
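For what it's worth, a count like that could be collected with an instrumented wrapper along these lines (purely a hypothetical sketch, not how the number above was actually measured):
```c
#include <stdio.h>
#include <stdlib.h>

// Count frees of non-NULL pointers and report the total at exit.
static unsigned long long xfree_nonnull_calls;

static void xfree_counted(void *ptr)
{
  if (ptr != NULL) {
    xfree_nonnull_calls++;
  }
  free(ptr);
}

static void report_xfree_count(void)
{
  fprintf(stderr, "xfree(ptr != NULL) calls: %llu\n", xfree_nonnull_calls);
}

int main(void)
{
  atexit(report_xfree_count);
  xfree_counted(malloc(16));  // counted
  xfree_counted(NULL);        // not counted
  return 0;
}
```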
|