Description
I'll describe the problem with an example. I'm using a slightly older version of bon, where the generated code was a bit different, to better demonstrate the problem:
```toml
[dependencies]
bon = "=3.4.0"
```
On the surface, things work fine. The function parameter `param` in the code below is assigned the "parameter" semantic token type, which is what I would expect it to always have.
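For context, here is a minimal sketch of the kind of input I mean; the function name `foo` and its exact signature are placeholders rather than the real code:

```rust
use bon::builder;

// `bon` expands this into the original function plus builder machinery.
// Highlighting `param` here yields the "parameter" semantic token type.
#[builder]
fn foo(param: u32) {
    let _ = param;
}
```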

However, if I add another layer of macro expansion, for example a `#[tracing::instrument]` attribute, then the highlighting breaks:
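Roughly like this (again a sketch, not the exact code from my crate):

```rust
use bon::builder;

// With an extra attribute macro in play, `param` is no longer highlighted as a
// parameter in the editor, even though the source barely changed.
#[builder]
#[tracing::instrument]
fn foo(param: u32) {
    let _ = param;
}
```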

The difference between these two cases is that in the first case, the macro outputs this (simplified):
```rust
// There is no intermediate macro here, and this usage appears first,
// so the "parameter" token type "wins".
fn __orig_foo(param: u32) {}

// `SetParam` is assigned the span of `param` from the input.
struct SetParam;

// A bunch of other code...
```
In the second case, the macro outputs this:
```rust
// Now there is an intermediate `#[tracing::instrument]` macro that RA needs to
// recurse into, but it doesn't do that for the purposes of semantic token type
// inference. RA therefore moves on to the next usage at `struct SetParam`,
// assigning the "struct" token type to `param` =(
#[tracing::instrument]
fn __orig_foo(param: u32) {}

// `SetParam` is assigned the span of `param` from the input.
struct SetParam;

// A bunch of other code...
```
Unfortunately, there isn't any other usage of `param` in parameter position in my macro output that I could reorder before the struct. The only half-fix I can do is to reorder the code so that the place where `param` is used in a variable declaration (`let param = ...`) in one of the methods' bodies comes first, so the span at least gets assigned the "variable" token type (which I did in elastio/bon#275). But it should be assigned the "parameter" token type instead.
Do you know of a good way to fix this in my case? I could technically generate some disabled dummy code under `#[cfg(rust_analyzer)]` with no intermediate macros, just to make RA assign the "parameter" token type. But this is such a hack =(
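For illustration, the hack would look roughly like this; the dummy function name is made up, and in the real macro output the identifier would carry the span of `param` from the input:

```rust
// rust-analyzer passes `--cfg rust_analyzer`, so this item is invisible to
// normal builds. The dummy uses the `param` identifier in parameter position
// with no intermediate macro in the way, so the "parameter" token type wins.
#[cfg(rust_analyzer)]
#[allow(dead_code)]
fn __bon_rust_analyzer_dummy(param: u32) {
    let _ = param;
}
```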
That's why I created this issue. I think we should consider changing the logic in RA so that it does recurse into macros when determining the semantic token type. Then macro authors could always be sure that the first usage of a span wins the semantic token type inference, no matter how deeply it is nested in the macro expansion fixed-point iteration.
References
- My old issue about syntax highlighting guarantees: Breaking change in highlighting of code under a proc-macro #18438