LangChain

The Timbr LangChain SDK integrates natural language inputs with Timbr's ontology-driven semantic layer. Built on Timbr's ontology capabilities, the SDK connects to Timbr data models and uses their semantic relationships and annotations, enabling users to query data in business-friendly language.
It uses Large Language Models (LLMs) to interpret user questions, generate Timbr SQL, and fetch query results, enabling seamless interaction with any database connected to Timbr.
The SDK translates user questions into Timbr SQL that automatically aligns with the ontology's structure, ensuring context-aware results that reflect the underlying semantic relationships.
Requirements
To use this SDK, ensure you have the following:
- Python version: 3.9.13
- Required Dependencies:
langchain==0.3.25
langchain_community==0.3.20
langgraph==0.3.20
pytimbr-api>=1.0.8
pydantic==2.10.4
- Optional Dependencies (Depending on LLM):
langchain-anthropic==0.3.1
anthropic==0.42.0
langchain-openai==0.3.16
openai==1.77.0
Installation
Using pip
python -m pip install langchain-timbr
Install with selected LLM providers
# One of: openai, anthropic, google, azure_openai, snowflake, databricks (or 'all')
python -m pip install 'langchain-timbr[<your selected providers, comma-separated without spaces>]'
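For example, to install with the OpenAI and Anthropic providers (any combination from the list above works the same way):
python -m pip install 'langchain-timbr[openai,anthropic]'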
Configuration
All chains, agents, and nodes support optional environment-based configuration. You can set the following environment variables to provide default values and simplify setup of the provided tools:
Timbr Connection Parameters
- TIMBR_URL: Default Timbr server URL
- TIMBR_TOKEN: Default Timbr authentication token
- TIMBR_ONTOLOGY: Default ontology/knowledge graph name
When these environment variables are set, the corresponding parameters (url, token, ontology) become optional in all chain and agent constructors and will use the environment values as defaults.
LLM Configuration Parameters
- LLM_TYPE: The type of LLM provider (one of the langchain_timbr LlmTypes enum values: 'openai-chat', 'anthropic-chat', 'chat-google-generative-ai', 'azure-openai-chat', 'snowflake-cortex', 'chat-databricks')
- LLM_API_KEY: The API key for authenticating with the LLM provider
- LLM_MODEL: The model name or deployment to use
- LLM_TEMPERATURE: Temperature setting for the LLM
- LLM_ADDITIONAL_PARAMS: Additional parameters as dict or JSON string
When LLM environment variables are set, the llm parameter becomes optional and will use the LlmWrapper with environment configuration.
Example environment setup:
# Timbr connection
export TIMBR_URL="https://your-timbr-app.com/"
export TIMBR_TOKEN="tk_XXXXXXXXXXXXXXXXXXXXXXXX"
export TIMBR_ONTOLOGY="timbr_knowledge_graph"
# LLM configuration
export LLM_TYPE="openai-chat"
export LLM_API_KEY="your-openai-api-key"
export LLM_MODEL="gpt-4o"
export LLM_TEMPERATURE="0.1"
export LLM_ADDITIONAL_PARAMS='{"max_tokens": 1000}'
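With these variables exported, chains and agents can be constructed without passing connection or LLM arguments explicitly. A minimal sketch (the question is illustrative; see the chain reference below for the full parameter list):
from langchain_timbr import ExecuteTimbrQueryChain

# Connection and LLM settings are read from the environment variables above
execute_timbr_query_chain = ExecuteTimbrQueryChain()
result = execute_timbr_query_chain.invoke({ "prompt": "What are the total sales for last month?" })
print(result["sql"])
print(result["rows"])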
Features
- Multi-LLM Support: Integrates with OpenAI GPT, Anthropic Claude, Google Gemini, Databricks DBRX/Llama, Snowflake Cortex, and Timbr's native LLM (or any custom LLM using the LangChain interface).
- SQL Generation: Generate semantic SQL queries (ontology enriched queries).
- Knowledge Graph Access: Interact with ontologies in natural language and retrieve context-aware answers.
- Streamlined Querying: Combine natural language inputs with Timbr using simple methods like run_llm_query.
LangChain interface
Timbr SQL Agent
Create a Timbr SQL agent that wraps the pipeline to identify the relevant concept and generate the SQL query over the ontology.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| ontology | str Default: None Optional | Name of the ontology/knowledge graph. If None, uses the value from the TIMBR_ONTOLOGY environment variable. |
| schema | str Default: None Optional | Name of the schema to query. |
| concept | str Default: None Optional | Name of a specific concept to query. |
| concepts_list | List[str] or str Default: None Optional | Collection of concepts to include (List of strings, or a string of concept names divided by comma). - If None, empty or '*', all available concepts are used. - If populated, only those concepts will be included in query generation. - If 'none' or 'null', no concepts will be used for the query. |
| views_list | List[str] or str Default: None Optional | Collection of views/cubes to include (List of strings, or a string of view/cube names divided by comma). - If None, empty or '*', all available views/cubes are used. - If populated, only those views/cubes will be included in query generation. - If 'none' or 'null', no views/cubes will be used for the query. |
| include_logic_concepts | bool Default: False Optional | Whether to include logic concepts (concepts without unique properties which only inherit from an upper level concept with filter logic) in the query. Note: This parameter has no effect when concepts_list is provided. |
| include_tags | List[str] or str Default: None Optional | Specific concept/property tag names to consider when generating the query. - If None or empty, no tags are used. - If a single string or list of strings is provided, only those tags (if they exist) will be attached to the prompt. - Use a list of strings or a comma-separated string (e.g. 'tag1,tag2') to specify multiple tags. - Use '*' to include all tags. |
| exclude_properties | List[str] or str Default: None Optional | Collection of properties to exclude from the query (List of strings, or a string of property names divided by comma. entity_id, entity_type & entity_label are excluded by default). |
| should_validate_sql | bool Default: True Optional | Whether to validate the SQL before executing it. |
| retries | int Default: 2 Optional | Number of retry attempts if the generated SQL is invalid. |
| max_limit | int Default: 100 Optional | Maximum number of rows to return. |
| retry_if_no_results | bool Default: False Optional | Whether to retry when the query returns no results. If the generated SQL returns no rows, the agent will re-generate the SQL query and re-run it. |
| no_results_max_retries | int Default: 2 Optional | Number of retry attempts when the query returns no results. |
| generate_answer | bool Default: False Optional | Whether to generate a natural language answer from the query results. |
| note | str Default: None Optional | Additional note to extend the LLM prompt. |
| db_is_case_sensitive | bool Default: False Optional | Whether the database is case sensitive. |
| graph_depth | int Default: 1 Optional | Maximum number of relationship hops to traverse from the source concept during schema exploration. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
Create an agent and use it with AgentExecutor
from langchain.agents import AgentExecutor
from langchain_timbr import TimbrSqlAgent
agent = TimbrSqlAgent(
# llm is optional if LLM environment variables are set
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders","Customers"], # optional
views_list=["sales_view"], # optional
note="Focus on US region", # optional
retries=2, # optional
should_validate_sql=True # optional
)
agent_executor = AgentExecutor.from_agent_and_tools(
agent=agent,
tools=[],
verbose=True
)
result = agent_executor.invoke("What are the total sales for last month?")
rows = result["rows"]
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
error = result.get("error", None)
usage_metadata = result.get("usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
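As an illustration of working with this structure, per-step totals can be summed into an overall count (a sketch assuming each step reports a total_tokens metric, per the comments above):
# Sum the total token usage reported across all pipeline steps
total_tokens = sum(step.get("total_tokens", 0) for step in usage_metadata.values())
print(f"Total tokens used: {total_tokens}")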
Use create_timbr_sql_agent to get an AgentExecutor instance and invoke it directly
from langchain_timbr import create_timbr_sql_agent
agent_executor = create_timbr_sql_agent(
llm=<llm>, # optional - if not provided, uses LlmWrapper with environment variables
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders","Customers"], # optional
views_list=["sales_view"], # optional
note="Focus on US region", # optional
)
result = agent_executor.invoke("What are the total sales for last month?")
rows = result["rows"]
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
error = result.get("error", None)
Identify Concept Chain
Returns the suggested concept to query based on the user question.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| ontology | str Default: None Optional | Name of the ontology/knowledge graph. If None, uses the value from the TIMBR_ONTOLOGY environment variable. |
| concepts_list | List[str] or str Default: None Optional | Collection of concepts to include (List of strings, or a string of concept names divided by comma). - If None, empty or '*', all available concepts are used. - If populated, only those concepts will be included in query generation. - If 'none' or 'null', no concepts will be used for the query. |
| views_list | List[str] or str Default: None Optional | Collection of views/cubes to include (List of strings, or a string of view/cube names divided by comma). - If None, empty or '*', all available views/cubes are used. - If populated, only those views/cubes will be included in query generation. - If 'none' or 'null', no views/cubes will be used for the query. |
| include_logic_concepts | bool Default: False Optional | Whether to include logic concepts (concepts without unique properties which only inherit from an upper level concept with filter logic) in the query. Note: This parameter has no effect when concepts_list is provided. |
| include_tags | List[str] or str Default: None Optional | Specific concept/property tag names to consider when generating the query. - If None or empty, no tags are used. - If a single string or list of strings is provided, only those tags (if they exist) will be attached to the prompt. - Use a list of strings or a comma-separated string (e.g. 'tag1,tag2') to specify multiple tags. - Use '*' to include all tags. |
| should_validate_sql | bool Default: True Optional | Whether to validate the identified concept before returning it. |
| retries | int Default: 2 Optional | Number of retry attempts if the identified concept is invalid. |
| note | str Default: None Optional | Additional note to extend the LLM prompt. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
from langchain_timbr import IdentifyTimbrConceptChain
identify_timbr_concept_chain = IdentifyTimbrConceptChain(
# llm is optional if LLM environment variables are set
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
concepts_list=["Sales","Orders"], # optional
views_list=["sales_view"], # optional
note="Focus on last month's data" # optional
)
result = identify_timbr_concept_chain.invoke({ "prompt": "What are the total sales for last month?" })
concept = result["concept"]
schema = result["schema"]
usage_metadata = result.get("identify_concept_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
Generate SQL Chain
Returns the suggested SQL based on the user question.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| ontology | str Default: None Optional | Name of the ontology/knowledge graph. If None, uses the value from the TIMBR_ONTOLOGY environment variable. |
| schema | str Default: None Optional | Name of the schema to query. |
| concept | str Default: None Optional | Name of a specific concept to query. |
| concepts_list | List[str] or str Default: None Optional | Collection of concepts to include (List of strings, or a string of concept names divided by comma). - If None, empty or '*', all available concepts are used. - If populated, only those concepts will be included in query generation. - If 'none' or 'null', no concepts will be used for the query. |
| views_list | List[str] or str Default: None Optional | Collection of views/cubes to include (List of strings, or a string of view/cube names divided by comma). - If None, empty or '*', all available views/cubes are used. - If populated, only those views/cubes will be included in query generation. - If 'none' or 'null', no views/cubes will be used for the query. |
| include_logic_concepts | bool Default: False Optional | Whether to include logic concepts (concepts without unique properties which only inherit from an upper level concept with filter logic) in the query. Note: This parameter has no effect when concepts_list is provided. |
| include_tags | List[str] or str Default: None Optional | Specific concept/property tag names to consider when generating the query. - If None or empty, no tags are used. - If a single string or list of strings is provided, only those tags (if they exist) will be attached to the prompt. - Use a list of strings or a comma-separated string (e.g. 'tag1,tag2') to specify multiple tags. - Use '*' to include all tags. |
| exclude_properties | List[str] or str Default: None Optional | Collection of properties to exclude from the query (List of strings, or a string of property names divided by comma. entity_id, entity_type & entity_label are excluded by default). |
| should_validate_sql | bool Default: True Optional | Whether to validate the SQL before executing it. |
| retries | int Default: 2 Optional | Number of retry attempts if the generated SQL is invalid. |
| max_limit | int Default: 100 Optional | Maximum number of rows to return. |
| note | str Default: None Optional | Additional note to extend the LLM prompt. |
| db_is_case_sensitive | bool Default: False Optional | Whether the database is case sensitive. |
| graph_depth | int Default: 1 Optional | Maximum number of relationship hops to traverse from the source concept during schema exploration. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
from langchain_timbr import GenerateTimbrSqlChain
generate_timbr_sql_chain = GenerateTimbrSqlChain(
# llm is optional if LLM environment variables are set
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders"], # optional
views_list=["sales_view"], # optional
note="We only need sums" # optional
)
result = generate_timbr_sql_chain.invoke({ "prompt": "What are the total sales for last month?" })
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
usage_metadata = result.get("generate_sql_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
Validate SQL Chain
Validates the Timbr SQL and re-generates it if necessary, based on the user question.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| ontology | str Default: None Optional | Name of the ontology/knowledge graph. If None, uses the value from the TIMBR_ONTOLOGY environment variable. |
| schema | str Default: None Optional | Name of the schema to query. |
| concept | str Default: None Optional | Name of a specific concept to query. |
| retries | int Default: 2 Optional | Number of retry attempts if the generated SQL is invalid. |
| concepts_list | List[str] or str Default: None Optional | Collection of concepts to include (List of strings, or a string of concept names divided by comma). - If None, empty or '*', all available concepts are used. - If populated, only those concepts will be included in query generation. - If 'none' or 'null', no concepts will be used for the query. |
| views_list | List[str] or str Default: None Optional | Collection of views/cubes to include (List of strings, or a string of view/cube names divided by comma). - If None, empty or '*', all available views/cubes are used. - If populated, only those views/cubes will be included in query generation. - If 'none' or 'null', no views/cubes will be used for the query. |
| include_logic_concepts | bool Default: False Optional | Whether to include logic concepts (concepts without unique properties which only inherit from an upper level concept with filter logic) in the query. Note: This parameter has no effect when concepts_list is provided. |
| include_tags | List[str] or str Default: None Optional | Specific concept/property tag names to consider when generating the query. - If None or empty, no tags are used. - If a single string or list of strings is provided, only those tags (if they exist) will be attached to the prompt. - Use a list of strings or a comma-separated string (e.g. 'tag1,tag2') to specify multiple tags. - Use '*' to include all tags. |
| exclude_properties | List[str] or str Default: None Optional | Collection of properties to exclude from the query (List of strings, or a string of property names divided by comma. entity_id, entity_type & entity_label are excluded by default). |
| max_limit | int Default: 100 Optional | Maximum number of rows to return. |
| note | str Default: None Optional | Additional note to extend the LLM prompt. |
| db_is_case_sensitive | bool Default: False Optional | Whether the database is case sensitive. |
| graph_depth | int Default: 1 Optional | Maximum number of relationship hops to traverse from the source concept during schema exploration. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
from langchain_timbr import ValidateTimbrSqlChain
validate_timbr_sql_chain = ValidateTimbrSqlChain(
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders"], # optional
views_list=["sales_view"], # optional
note="We only need sums", # optional
retries=3
)
result = validate_timbr_sql_chain.invoke({
"prompt": "What are the total sales for last month?",
"sql": "SELECT SUM(amount) FROM sales WHERE date > current_date - INTERVAL '1 month'"
})
validated_sql = result["sql"]
is_sql_valid = result["is_sql_valid"]
error = result["error"]
concept = result["concept"]
schema = result["schema"]
usage_metadata = result.get("validate_sql_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
Execute Query Chain
Calls the Generate SQL Chain and executes the query in Timbr, returning the query results.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| ontology | str Default: None Optional | Name of the ontology/knowledge graph. If None, uses the value from the TIMBR_ONTOLOGY environment variable. |
| schema | str Default: None Optional | Name of the schema to query. |
| concept | str Default: None Optional | Name of a specific concept to query. |
| concepts_list | List[str] or str Default: None Optional | Collection of concepts to include (List of strings, or a string of concept names divided by comma). - If None, empty or '*', all available concepts are used. - If populated, only those concepts will be included in query generation. - If 'none' or 'null', no concepts will be used for the query. |
| views_list | List[str] or str Default: None Optional | Collection of views/cubes to include (List of strings, or a string of view/cube names divided by comma). - If None, empty or '*', all available views/cubes are used. - If populated, only those views/cubes will be included in query generation. - If 'none' or 'null', no views/cubes will be used for the query. |
| include_logic_concepts | bool Default: False Optional | Whether to include logic concepts (concepts without unique properties which only inherit from an upper level concept with filter logic) in the query. Note: This parameter has no effect when concepts_list is provided. |
| include_tags | List[str] or str Default: None Optional | Specific concept/property tag names to consider when generating the query. - If None or empty, no tags are used. - If a single string or list of strings is provided, only those tags (if they exist) will be attached to the prompt. - Use a list of strings or a comma-separated string (e.g. 'tag1,tag2') to specify multiple tags. - Use '*' to include all tags. |
| exclude_properties | List[str] or str Default: None Optional | Collection of properties to exclude from the query (List of strings, or a string of property names divided by comma. entity_id, entity_type & entity_label are excluded by default). |
| should_validate_sql | bool Default: True Optional | Whether to validate the SQL before executing it. |
| retries | int Default: 2 Optional | Number of retry attempts if the generated SQL is invalid. |
| max_limit | int Default: 100 Optional | Maximum number of rows to return. |
| retry_if_no_results | bool Default: False Optional | Whether to retry when the query returns no results. If the generated SQL returns no rows, the chain will re-generate the SQL query and re-run it. |
| no_results_max_retries | int Default: 2 Optional | Number of retry attempts when the query returns no results. |
| note | str Default: None Optional | Additional note to extend the LLM prompt. |
| db_is_case_sensitive | bool Default: False Optional | Whether the database is case sensitive. |
| graph_depth | int Default: 1 Optional | Maximum number of relationship hops to traverse from the source concept during schema exploration. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
from langchain_timbr import ExecuteTimbrQueryChain
execute_timbr_query_chain = ExecuteTimbrQueryChain(
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token, ontology are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
ontology="timbr_knowledge_graph", # optional: uses TIMBR_ONTOLOGY/ONTOLOGY if not specified
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders"], # optional
views_list=["sales_view"], # optional
note="We only need sums", # optional
retries=3, # optional
should_validate_sql=True # optional
)
result = execute_timbr_query_chain.invoke({ "prompt": "What are the total sales for last month?" })
rows = result["rows"]
sql = result["sql"]
concept = result["concept"]
schema = result["schema"]
error = result.get("error", None)
usage_metadata = result.get("execute_timbr_usage_metadata", {})
determine_concept_usage = usage_metadata.get('determine_concept', {})
generate_sql_usage = usage_metadata.get('generate_sql', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
Generate Answer Chain
Generates a natural language answer based on the prompt and query results.
Parameters:
| Parameter | Type / Default | Description |
|---|---|---|
| llm | LLM Default: None Optional | Language model instance (or a function taking a prompt string and returning an LLM's response). If None, uses LlmWrapper with environment variables. |
| url | str Default: None Optional | Timbr server URL. If None, uses the value from the TIMBR_URL environment variable. |
| token | str Default: None Optional | Timbr authentication token. If None, uses the value from the TIMBR_TOKEN environment variable. |
| verify_ssl | bool Default: True Optional | Whether to verify SSL certificates. |
| is_jwt | bool Default: False Optional | Whether to use JWT authentication. |
| jwt_tenant_id | str Default: None Optional | Tenant ID for JWT authentication (if applicable). |
| conn_params | dict Default: None Optional | Extra Timbr connection parameters sent with every request (e.g., 'x-api-impersonate-user'). |
from langchain_timbr import GenerateAnswerChain
generate_answer_chain = GenerateAnswerChain(
llm=<llm>, # optional: uses LlmWrapper with env vars if not specified
# url, token are optional if environment variables are set
url="https://your-timbr-app.com/", # optional: uses TIMBR_URL if not specified
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX", # optional: uses TIMBR_TOKEN if not specified
)
answer_result = generate_answer_chain.invoke({
"prompt": "What are the total sales for last month?",
"rows": [{"total_sales": 1250000}],
"sql": "SELECT COUNT(1) AS `total_sales` FROM `dtimbr`.`order`",
})
answer = answer_result["answer"]
usage_metadata = answer_result.get("generate_answer_usage_metadata", {})
answer_question_usage = usage_metadata.get('answer_question', {})
# Each usage_metadata item contains:
# * 'approximate': Estimated token count calculated before invoking the LLM
# * 'input_tokens'/'output_tokens'/'total_tokens'/etc.: Actual token usage metrics returned by the LLM
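GenerateAnswerChain can be paired with ExecuteTimbrQueryChain by feeding the executed query's rows and SQL back in together with the original prompt. A minimal sketch combining the two chains documented above:
# Run the query first, then summarize the results in natural language
# (execute_timbr_query_chain is from the Execute Query Chain section above)
query_result = execute_timbr_query_chain.invoke({ "prompt": "What are the total sales for last month?" })
answer_result = generate_answer_chain.invoke({
    "prompt": "What are the total sales for last month?",
    "rows": query_result["rows"],
    "sql": query_result["sql"],
})
print(answer_result["answer"])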
Quick Start using Timbr API
1. Initialize an LLM instance
# Using environment variable configuration
from langchain_timbr import LlmWrapper
llm = LlmWrapper()
# Using OpenAI provider:
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(
openai_api_key="<api_key>",
model_name="gpt-4o", # or any other preferred model
)
# Using Anthropic provider:
from langchain_anthropic import ChatAnthropic
llm = ChatAnthropic(
anthropic_api_key="<api_key>",
model="claude-sonnet-4-20250514", # or any other preferred model
)
# Or any other LLM provider of your choice, such as ChatGoogleGenerativeAI, ChatSnowflakeCortex, AzureChatOpenAI, etc.
2. Initialize the Timbr agent executor
from langchain_timbr import create_timbr_sql_agent
agent_executor = create_timbr_sql_agent(
llm=llm,
url="https://your-timbr-app.com/",
token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
ontology="timbr_knowledge_graph",
schema="dtimbr", # optional
concept="Sales", # optional
concepts_list=["Sales","Orders","Customers"], # optional
views_list=["sales_view"], # optional
note="Focus on US region", # optional
generate_answer=False, # optional
)
3. Invoke the agent with a user question and fetch the results
result = agent_executor.invoke("What are the total sales for last month?")
answer = result["answer"] # Relevant when generate_answer is True
rows = result["rows"]
sql = result["sql"]
schema = result["schema"]
concept = result["concept"]
usage_metadata = result["usage_metadata"] # Token usage metadata - estimated & from LLM response
error = result.get("error", None)
Examples
Explore the examples/ directory for detailed use cases, including:
- Streamlit Integration: Build a user-friendly app to query Timbr interactively.
- LangChain Chains: Use GenerateTimbrSqlChain, ExecuteTimbrQueryChain and more, for more granular control.
- Custom Agents: Create custom agents with LangChain’s AgentExecutor to handle complex workflows.
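As a starting point for the Streamlit use case listed above, here is a minimal sketch that wires a text input to the agent executor from the Quick Start (the widget choices are illustrative):
import streamlit as st

st.title("Ask Timbr")
question = st.text_input("Ask a question about your data")
if question:
    result = agent_executor.invoke(question)  # agent_executor from the Quick Start section
    st.code(result["sql"], language="sql")    # show the generated Timbr SQL
    st.dataframe(result["rows"])              # show the returned rows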
Timbr Methods Overview
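The snippets below assume an initialized connector instance, referred to as llm_connector. A minimal setup sketch, assuming the SDK exports a TimbrLlmConnector class that accepts the same connection parameters used throughout this page (verify the exact class name and arguments against your installed version):
from langchain_timbr import TimbrLlmConnector  # class name assumed; check your SDK version

llm_connector = TimbrLlmConnector(
    llm=llm,  # any LLM instance, as in the Quick Start
    url="https://your-timbr-app.com/",
    token="tk_XXXXXXXXXXXXXXXXXXXXXXXX",
    ontology="timbr_knowledge_graph",
)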
get_ontologies
Fetch a list of available knowledge graphs in the Timbr environment.
ontologies_list = llm_connector.get_ontologies()
set_ontology
Set or switch the ontology for subsequent operations.
llm_connector.set_ontology("<ontology_name>")
determine_concept
Use the LLM to determine the appropriate concept and schema for a query.
concept_name, schema_name = llm_connector.determine_concept("Show me sales by region.")
generate_sql
Generate Timbr SQL for a user query.
sql_query = llm_connector.generate_sql("What is the revenue by product category?", concept_name="sales_data")
validate_sql
Validate a generated Timbr SQL query against the user question.
is_sql_valid, error = llm_connector.validate_sql("What is the revenue by product category?", sql_query="SELECT SUM(revenue) FROM sales GROUP BY product_category")
run_timbr_query
Run a Timbr SQL query and fetch results.
results = llm_connector.run_timbr_query("SELECT * FROM sales_data")
run_llm_query
Combine SQL generation and execution in a single step.
response = llm_connector.run_llm_query("What are the top 5 products by sales?")