vllm.v1.attention.backends.utils ¶
PerLayerParameters dataclass ¶
Currently, the FlashInfer backend only supports models in which all layers share the same values for the following hyperparameters. It should not be used for the trtllm-gen backend, since that backend supports per-layer values for these hyperparameters.
Source code in vllm/v1/attention/backends/utils.py
get_dcp_local_seq_lens ¶
get_dcp_local_seq_lens(
seq_lens: Tensor,
dcp_size: int = 1,
dcp_rank: int | None = None,
cp_kv_cache_interleave_size: int = 1,
) -> Tensor
When using DCP, the KV cache size stored on each rank may differ; use this function to compute the split decode seq_lens for each DCP rank. Only DCP is handled for now; the general CP case can be extended from this.
Source code in vllm/v1/attention/backends/utils.py
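For illustration, a minimal sketch of how per-rank decode lengths could be derived, assuming KV positions are assigned to DCP ranks round-robin in groups of cp_kv_cache_interleave_size tokens; the function name and output layout here are illustrative and may differ from the actual implementation:

```python
import torch

# Illustrative sketch only: assign each KV position to a DCP rank
# round-robin in groups of `interleave` tokens, then count how many
# positions land on each rank. The real function's output layout may differ.
def sketch_dcp_local_seq_lens(seq_lens: torch.Tensor,
                              dcp_size: int = 1,
                              interleave: int = 1) -> torch.Tensor:
    out = torch.zeros(seq_lens.numel(), dcp_size, dtype=torch.long)
    for i, seq_len in enumerate(seq_lens.tolist()):
        groups = torch.arange(seq_len) // interleave  # interleave group per token
        ranks = groups % dcp_size                     # owning rank per token
        out[i] = torch.bincount(ranks, minlength=dcp_size)
    return out

# Two requests of length 5 and 9, split across 2 DCP ranks, interleave=2.
print(sketch_dcp_local_seq_lens(torch.tensor([5, 9]), dcp_size=2, interleave=2))
# tensor([[3, 2],
#         [5, 4]])
```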
get_per_layer_parameters ¶
get_per_layer_parameters(
vllm_config: VllmConfig,
layer_names: list[str],
cls_: type[AttentionImpl],
) -> dict[str, PerLayerParameters]
Scan the layers in layer_names and determine the hyperparameters to use during the plan step.
Source code in vllm/v1/attention/backends/utils.py
infer_global_hyperparameters ¶
infer_global_hyperparameters(
per_layer_params: dict[str, PerLayerParameters],
) -> PerLayerParameters
Currently, the FlashInfer backend (other than trtllm-gen) only supports models in which all layers share the same values for the following hyperparameters:
- window_left
- logits_soft_cap
- sm_scale
So this function asserts that all layers share the same values for these hyperparameters and returns the global values.
Source code in vllm/v1/attention/backends/utils.py
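A sketch of the uniformity check, using a hypothetical stand-in dataclass whose field names follow the list above (the real PerLayerParameters, and the dict produced by get_per_layer_parameters, may carry more detail):

```python
from dataclasses import dataclass

# Hypothetical stand-in for PerLayerParameters; the real dataclass may differ.
@dataclass
class LayerParams:
    window_left: int
    logits_soft_cap: float | None
    sm_scale: float

def sketch_infer_global(per_layer: dict[str, LayerParams]) -> LayerParams:
    """Assert that every layer (e.g. as collected by get_per_layer_parameters)
    shares the same hyperparameters and return the shared values."""
    global_params = next(iter(per_layer.values()))
    for name, params in per_layer.items():
        assert params == global_params, (
            f"layer {name} uses different attention hyperparameters; the "
            "FlashInfer backend (other than trtllm-gen) requires uniform values")
    return global_params

print(sketch_infer_global({
    "model.layers.0.attn": LayerParams(-1, None, 0.125),
    "model.layers.1.attn": LayerParams(-1, None, 0.125),
}))
```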
mamba_get_block_table_tensor ¶
mamba_get_block_table_tensor(
block_table: Tensor,
seq_lens: Tensor,
kv_cache_spec: KVCacheSpec,
mamba_cache_mode: str,
) -> Tensor
Get the block table tensor for mamba kernels from the input common_attn_metadata.block_table_tensor, depending on the mamba cache mode:
- "all": input shape (#requests, cdiv(max_model_len, block_size)); output shape (#requests, cdiv(max_model_len, block_size)).
- "none": input shape (#requests, 1 + num_speculative_blocks); output shape (#requests, 1 + num_speculative_blocks).
- "align": input shape (#requests, cdiv(max_model_len, block_size)); output shape (#requests, 1 + num_speculative_blocks), consisting of the last 1 + num_speculative_blocks blocks of each request.
Source code in vllm/v1/attention/backends/utils.py
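A minimal sketch of the "align" mode only, assuming the last 1 + num_speculative_blocks block ids are selected based on how many blocks each request currently occupies; the names and indexing here are assumptions, not the vLLM implementation:

```python
import torch

# Illustrative sketch of "align": from a full block table of shape
# (#requests, cdiv(max_model_len, block_size)), keep the trailing
# `1 + num_speculative_blocks` blocks that each request occupies.
def sketch_align_block_table(block_table: torch.Tensor,
                             seq_lens: torch.Tensor,
                             block_size: int,
                             num_speculative_blocks: int) -> torch.Tensor:
    num_out = 1 + num_speculative_blocks
    # Index of the last block each request uses (ceil-div of its length).
    last_block = (seq_lens + block_size - 1) // block_size - 1
    # Gather the trailing `num_out` block ids per request.
    offsets = torch.arange(-num_out + 1, 1)           # e.g. [-1, 0]
    idx = (last_block.unsqueeze(1) + offsets).clamp(min=0)
    return torch.gather(block_table, 1, idx)

bt = torch.arange(12).reshape(2, 6)                   # 2 requests, 6 blocks each
print(sketch_align_block_table(bt, torch.tensor([33, 70]), 16, 1))
# tensor([[ 1,  2],
#         [ 9, 10]])
```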
reorder_batch_to_split_decodes_and_prefills ¶
reorder_batch_to_split_decodes_and_prefills(
input_batch: InputBatch,
scheduler_output: SchedulerOutput,
decode_threshold: int = 1,
) -> bool
Reorders the batch to split it into decode and prefill requests, placing all requests with <= decode_threshold tokens at the front of the batch.
Returns:
| Type | Description |
|---|---|
| bool | True if the batch was modified, False otherwise. |
Source code in vllm/v1/attention/backends/utils.py
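A rough sketch of the reordering criterion, operating on plain per-request query lengths rather than an InputBatch (illustrative only):

```python
# Illustrative sketch: requests whose scheduled query length is at most
# decode_threshold move to the front; report whether the order changed.
def sketch_reorder(query_lens: list[int], decode_threshold: int = 1):
    order = sorted(range(len(query_lens)),
                   key=lambda i: query_lens[i] > decode_threshold)
    modified = order != list(range(len(query_lens)))
    return order, modified

print(sketch_reorder([7, 1, 1, 12]))  # ([1, 2, 0, 3], True)
```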
reshape_attn_output_for_spec_decode ¶
Reshapes the attention output tensor so that the batch_size and seq_len dimensions are combined.
Source code in vllm/v1/attention/backends/utils.py
reshape_query_for_spec_decode ¶
Reshapes the query tensor for the specified batch size so that it has shape (batch_size, seq_len, num_heads, head_dim).
Source code in vllm/v1/attention/backends/utils.py
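A small shape-only sketch of both reshapes; dimension names follow the descriptions above, and the actual call signatures are in the source:

```python
import torch

batch_size, seq_len, num_heads, head_dim = 4, 3, 8, 64

# reshape_query_for_spec_decode: flat tokens -> (batch_size, seq_len, num_heads, head_dim)
flat_query = torch.randn(batch_size * seq_len, num_heads, head_dim)
query = flat_query.view(batch_size, seq_len, num_heads, head_dim)

# reshape_attn_output_for_spec_decode: fold batch_size and seq_len back together
attn_out = torch.randn(batch_size, seq_len, num_heads, head_dim)
flat_out = attn_out.view(batch_size * seq_len, num_heads, head_dim)

print(query.shape, flat_out.shape)
```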
split_decodes_and_prefills ¶
split_decodes_and_prefills(
common_attn_metadata: CommonAttentionMetadata,
decode_threshold: int = 1,
require_uniform: bool = False,
) -> tuple[int, int, int, int]
Assuming a reordered batch, finds the boundary between prefill and decode requests.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| common_attn_metadata | CommonAttentionMetadata | CommonAttentionMetadata object containing the batch metadata. | required |
| decode_threshold | int | The maximum query length to be considered a decode. | 1 |
| require_uniform | bool | If True, requires that all decode requests have the same query length. When set, some queries may be considered prefills even if they are <= decode_threshold, in order to ensure uniformity. | False |
Returns:
| Name | Type | Description |
|---|---|---|
| num_decodes | int | The number of decode requests. |
| num_prefills | int | The number of prefill requests. |
| num_decode_tokens | int | The number of tokens in the decode requests. |
| num_prefill_tokens | int | The number of tokens in the prefill requests. |
Source code in vllm/v1/attention/backends/utils.py
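A sketch of the boundary search on plain query lengths (the real function reads them from CommonAttentionMetadata; require_uniform is omitted here):

```python
# Illustrative only: the batch is assumed already reordered (decodes first).
def sketch_split(query_lens: list[int], decode_threshold: int = 1):
    num_decodes = 0
    while (num_decodes < len(query_lens)
           and query_lens[num_decodes] <= decode_threshold):
        num_decodes += 1
    num_prefills = len(query_lens) - num_decodes
    num_decode_tokens = sum(query_lens[:num_decodes])
    num_prefill_tokens = sum(query_lens[num_decodes:])
    return num_decodes, num_prefills, num_decode_tokens, num_prefill_tokens

print(sketch_split([1, 1, 1, 5, 9]))  # (3, 2, 3, 14)
```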
split_decodes_prefills_and_extends ¶
split_decodes_prefills_and_extends(
common_attn_metadata: CommonAttentionMetadata,
decode_threshold: int = 1,
) -> tuple[int, int, int, int, int, int]
Assuming a reordered batch, finds the boundaries between decode, extend, and prefill requests.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| common_attn_metadata | CommonAttentionMetadata | CommonAttentionMetadata object containing the batch metadata. | required |
| decode_threshold | int | The maximum query length to be considered a decode. | 1 |
Returns:
| Name | Type | Description |
|---|---|---|
| num_decodes | int | The number of decode requests. |
| num_extends | int | The number of extend requests. |
| num_prefills | int | The number of prefill requests. |
| num_decode_tokens | int | The number of tokens in the decode requests. |
| num_extend_tokens | int | The number of tokens in the extend requests. |
| num_prefill_tokens | int | The number of tokens in the prefill requests. |
Source code in vllm/v1/attention/backends/utils.py
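A sketch under the assumption that an "extend" is a request with a query longer than decode_threshold that already has computed (cached) context tokens, while a pure prefill starts with no computed tokens; this split criterion is an assumption, not taken from the source:

```python
# Assumption: extends = long queries with existing context; prefills = long
# queries starting from scratch. Batch is assumed ordered decodes, extends,
# then prefills. The real function reads CommonAttentionMetadata instead.
def sketch_split_three(query_lens: list[int],
                       num_computed_tokens: list[int],
                       decode_threshold: int = 1):
    num_decodes = num_extends = num_prefills = 0
    decode_toks = extend_toks = prefill_toks = 0
    for q, c in zip(query_lens, num_computed_tokens):
        if q <= decode_threshold:
            num_decodes += 1
            decode_toks += q
        elif c > 0:
            num_extends += 1
            extend_toks += q
        else:
            num_prefills += 1
            prefill_toks += q
    return (num_decodes, num_extends, num_prefills,
            decode_toks, extend_toks, prefill_toks)

print(sketch_split_three([1, 1, 6, 9], [30, 12, 40, 0]))
# (2, 1, 1, 2, 6, 9)
```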
split_prefill_chunks ¶
split_prefill_chunks(
seq_lens_cpu: Tensor,
workspace_size: int,
request_offset: int = 0,
) -> list[tuple[int, int]]
Split the prefill requests into chunks such that the total sequence length of each chunk is less than or equal to the workspace size.
Parameters:
| Name | Type | Description | Default |
|---|---|---|---|
| seq_lens_cpu | Tensor | The sequence lengths of the prefill requests on CPU. | required |
| workspace_size | int | The maximum workspace size (in tokens) per chunk. | required |
| request_offset | int | The offset to add to the request indices. | 0 |
Returns: A list of tuples of (reqs_start, reqs_end) representing chunk boundaries.
Source code in vllm/v1/attention/backends/utils.py
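A minimal sketch of the chunking rule described above: greedily group consecutive prefill requests so each chunk's total sequence length stays within workspace_size (illustrative, not the exact vLLM implementation):

```python
import torch

def sketch_split_prefill_chunks(seq_lens_cpu: torch.Tensor,
                                workspace_size: int,
                                request_offset: int = 0):
    chunks, start, total = [], 0, 0
    lens = seq_lens_cpu.tolist()
    for i, n in enumerate(lens):
        # Start a new chunk when adding this request would exceed the workspace.
        if total + n > workspace_size and total > 0:
            chunks.append((request_offset + start, request_offset + i))
            start, total = i, 0
        total += n
    chunks.append((request_offset + start, request_offset + len(lens)))
    return chunks

print(sketch_split_prefill_chunks(torch.tensor([300, 500, 700, 200]), 1024))
# [(0, 2), (2, 4)]
```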
subclass_attention_metadata ¶
subclass_attention_metadata(
name_prefix: str,
metadata_cls: Any,
fields: list[tuple[str, Any, Any]],
) -> Any
Return a new subclass of metadata_cls with additional fields.
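The same idea can be sketched with dataclasses.make_dataclass; the base class and extra field below are made up for illustration and are not the actual metadata types:

```python
from dataclasses import dataclass, field, fields, make_dataclass

# Hypothetical base metadata class, for illustration only.
@dataclass
class BaseMetadata:
    num_actual_tokens: int

# Build a subclass of BaseMetadata with one additional (hypothetical) field,
# named from a prefix plus the base class name.
ExtendedMetadata = make_dataclass(
    "MyBackendBaseMetadata",
    [("scheduler_metadata", object, field(default=None))],
    bases=(BaseMetadata,),
)

m = ExtendedMetadata(num_actual_tokens=8)
print([f.name for f in fields(m)])  # ['num_actual_tokens', 'scheduler_metadata']
```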