LLM Cost dashboard


Large language models (LLMs) are a key component of AI-powered applications, so understanding the costs associated with their usage is important: it allows you to optimize resources and manage budgets efficiently. Most LLMs use token-based pricing, where the cost depends on the number of tokens exchanged between the model and the user, typically counting both prompt (input) and response (output) tokens.
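As a minimal sketch of token-based pricing, the cost of a single request can be estimated by multiplying token counts by per-token rates. The prices below are illustrative placeholders, not any vendor's actual rates:

```python
# Hedged sketch: estimating request cost under token-based pricing.
# The per-token prices are hypothetical, for illustration only.
PRICE_PER_INPUT_TOKEN = 0.00001   # hypothetical: dollars per prompt token
PRICE_PER_OUTPUT_TOKEN = 0.00003  # hypothetical: dollars per response token

def request_cost(input_tokens: int, output_tokens: int) -> float:
    """Return the estimated cost of one LLM request in dollars."""
    return (input_tokens * PRICE_PER_INPUT_TOKEN
            + output_tokens * PRICE_PER_OUTPUT_TOKEN)

# Example: 1,200 prompt tokens and 400 response tokens
print(round(request_cost(1200, 400), 6))
```

Because output tokens are often priced higher than input tokens, long responses can dominate the cost even when prompts are large.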

The LLM Cost dashboard helps you track the cost and usage of LLM models. It provides metrics to analyze the following parameters:

  • Tokens and their cost
  • Retrieval Augmented Generation (RAG) latency and score
  • Graphics Processing Unit (GPU) usage

To view the dashboard

  1. From the navigation menu, click Dashboards.
  2. Search for the AIOps Observability folder and select it.
  3. Click LLM Cost.
    The dashboard is displayed.

    llm_cost_dashboard.png

Metrics in the LLM Cost dashboard

LLM Usage

  • Total Tokens: Displays the total number of tokens processed by the model during a given operation.
  • Cost Per Token: Displays the cost incurred per token for using the LLM during a given operation.
  • Total Cost: Displays the total cost incurred for using the LLM during a given operation.
  • Latency: Displays the time required by the LLM to process a request and return a response.
  • RAG Documents Retrieved: Displays the number of documents retrieved by the Retrieval Augmented Generation (RAG) system while using the LLM.
  • RAG Latency: Displays the latency (response time) of the RAG system while using the LLM.
  • RAG Relevance Score: Displays a score that indicates how relevant the retrieved information is to the query in the RAG system.
  • Top 5 GenAI Models by Token Usage: Displays a bar chart of the top five models by token usage.
  • Latency Trend: Displays the trend of model latency (time to process a request and return a response) over a selected period.
  • Avg Token Consumption vs Avg Usage Cost: Displays a comparison of the average number of tokens consumed and the average cost of token usage.
  • RAG Latency Trend: Displays the latency trend of the RAG system while using the LLM over a selected period.
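The usage panels are related by simple ratios. A minimal sketch of how per-token and per-request values can be derived from raw totals; the sample numbers are made up for illustration:

```python
# Hedged sketch: deriving panel-style values from raw totals.
# All sample numbers below are hypothetical.
total_tokens = 50_000   # cf. the Total Tokens panel
total_cost = 1.25       # cf. the Total Cost panel, in dollars
requests = 40           # number of LLM operations observed

cost_per_token = total_cost / total_tokens        # cf. Cost Per Token
avg_tokens_per_request = total_tokens / requests  # cf. Avg Token Consumption
avg_cost_per_request = total_cost / requests      # cf. Avg Usage Cost

print(cost_per_token, avg_tokens_per_request, avg_cost_per_request)
```

Tracking the averages alongside the totals helps separate growth in traffic (more requests) from growth in per-request consumption (longer prompts or responses).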

LLM GPU Usage

  • GPU Power Usage: Displays the power usage (in watts) of the GPU at a given moment.
  • GPU Temperature: Displays the temperature (in degrees Celsius) of the GPU.
  • GPU Memory Used: Displays the GPU memory (in MB) that is currently in use.
  • CPU Memory Utilization: Displays the percentage of CPU memory that is used for data transfers.
  • GPU Utilization: Displays the percentage of GPU compute capacity in use at a given moment. This metric indicates how much of the GPU's compute resources (cores and processing units) are being used for tasks such as computations, rendering, or machine learning operations.
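The GPU metrics above correspond to telemetry that tools such as NVIDIA's `nvidia-smi` can report in CSV form. A minimal parsing sketch over a hypothetical sample line; the sample values are made up, and the dashboard's actual data source may differ:

```python
# Hedged sketch: parsing one CSV line of GPU telemetry into the
# GPU metrics shown in the LLM GPU Usage panels.
# A real line could come from a command such as:
#   nvidia-smi --query-gpu=power.draw,temperature.gpu,memory.used,utilization.gpu \
#              --format=csv,noheader,nounits
sample_line = "215.4, 67, 10240, 83"  # hypothetical reading

# power (W), temperature (deg C), memory used (MB), utilization (%)
power_w, temp_c, mem_mb, util_pct = (
    float(v) for v in sample_line.split(", ")
)
print(power_w, temp_c, mem_mb, util_pct)
```

Sampling these values on a fixed interval and shipping them to the monitoring backend is one common way such panels are populated.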
