Description
The current wording of [exec.when.all] defines the `impls-for<when_all_t>::get-env` implementation as:

```cpp
[]<class State, class Rcvr>(auto&&, State& state, const Rcvr& rcvr) noexcept {
  return JOIN-ENV(MAKE-ENV(get_stop_token, state.stop_src.get_token()),
                  get_env(rcvr));
}
```
However, this formulation forces eager evaluation of the `get_stop_token` query: the stop token is computed up front and then has to be curried and stored in the returned environment.
If many algorithms using this formulation are composed (e.g. multiple nested `when_all()` senders), then the environment returned from the leaf operation's receiver can easily end up containing many such stop tokens, each copied into the final environment.
Instead, what I think we want to do is define an `impls-for<Cpo>::query-env(query, state, parent_rcvr)` operation rather than `get-env`. Then we change the `get_env` implementation on `basic-receiver` to instead return a `basic-env` that holds a `basic-operation* op_;` member and forwards each query by returning `impls-for<Cpo>::query-env(query, op_->state_, op_->rcvr_)`.
This would allow environment queries to be evaluated lazily instead of eagerly computing and storing their results.
The definition of `impls-for<when_all_t>::query-env` would then be:

```cpp
[]<class Query, class State, class Rcvr>(Query query, State& state, const Rcvr& rcvr) noexcept
    -> decltype(auto)
    requires std::same_as<Query, get_stop_token_t> ||
             std::tag_invocable<Query, const env_of_t<Rcvr>&>
{
  if constexpr (std::same_as<Query, get_stop_token_t>) {
    return state.stop_src.get_token();
  } else {
    return tag_invoke(query, std::as_const(get_env(rcvr)));
  }
}
```