Observability how-to guides
Step-by-step guides that cover key tasks and operations for adding observability to your LLM applications with LangSmith.
Tracing configuration
Set up LangSmith tracing to get visibility into your production applications.
Basic configuration
- Set your tracing project
- Enable or disable tracing
- Trace any Python or JS code
- Trace using the LangSmith REST API
- Trace without environment variables
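
To make these basic steps concrete, here is a minimal sketch of tracing a plain Python function with the LangSmith SDK. It assumes the `langsmith` package is installed and that the tracing environment variables noted in the comments are set; the guides above cover each option in detail, including how to trace without environment variables.

```python
from langsmith import traceable

# Assumed environment setup (see the guides above for the full list):
#   LANGSMITH_TRACING=true        enables tracing
#   LANGSMITH_API_KEY=<your key>  authenticates the SDK
#   LANGSMITH_PROJECT=my-project  sets the tracing project

@traceable  # records inputs, outputs, errors, and latency as a run
def format_prompt(subject: str) -> str:
    return f"Tell me a short fact about {subject}."

if __name__ == "__main__":
    print(format_prompt("otters"))
```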
 
Trace natively supported libraries
- Trace with LangChain
- Trace with LangGraph
- Trace the OpenAI API client
- Trace with Instructor (Python only)
- Trace with the Vercel AI SDK (JS only)
- Trace with OpenTelemetry
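
As an illustration of the native integrations, the sketch below wraps the OpenAI client with `wrap_openai` from the LangSmith SDK so each chat completion is logged as an LLM run. It assumes the `langsmith` and `openai` packages are installed and the tracing environment variables from the basic configuration are set; the model name is a placeholder.

```python
from openai import OpenAI
from langsmith.wrappers import wrap_openai

# Wrapping the client logs every completion call as an LLM run,
# including token usage, in the configured tracing project.
client = wrap_openai(OpenAI())

response = client.chat.completions.create(
    model="gpt-4o-mini",  # placeholder model name
    messages=[{"role": "user", "content": "Say hello in one word."}],
)
print(response.choices[0].message.content)
```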
 
Advanced configuration
- Configure threads
- Set a sampling rate for traces
- Add metadata and tags to traces
- Implement distributed tracing
- Access the current span within a traced function
- Log multimodal traces
- Log retriever traces
- Log custom LLM traces / provide custom token counts
- Prevent logging of sensitive data in traces
- Trace generator functions
- Calculate token-based costs for traces
- Trace JS functions in serverless environments
- Troubleshoot trace testing
- Upload files with traces
- Print out logs from the LangSmith SDK (Python only)
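
Several of these guides build on options of the `@traceable` decorator. As a hedged sketch, metadata and tags can be attached statically at decoration time or per call via `langsmith_extra`; the key names below mirror the Python SDK, but see the metadata and tags guide above for the authoritative API.

```python
from langsmith import traceable

@traceable(
    tags=["production"],                  # static tags on every run
    metadata={"pipeline_version": "v2"},  # static metadata on every run
)
def answer(question: str) -> str:
    return f"You asked: {question}"

# Per-invocation metadata and tags go through langsmith_extra.
answer(
    "What is LangSmith?",
    langsmith_extra={"metadata": {"user_id": "u-123"}, "tags": ["beta"]},
)
```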
 
Tracing projects UI & API
View and interact with your traces to debug your applications.
- Filter traces in a project
- Save a filter for your project
- Export traces using the SDK (low volume)
- Bulk export traces (high volume)
- Share or unshare a trace publicly
- Compare traces
- View threads
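
For the low-volume export path, a minimal sketch using the SDK's `Client.list_runs` is shown below. The project name and filter string are placeholders; the filter query syntax is covered in the filtering guide above.

```python
from langsmith import Client

client = Client()  # reads LANGSMITH_API_KEY from the environment

# Fetch root runs from one project; names and filter are placeholders.
runs = client.list_runs(
    project_name="my-project",
    is_root=True,                  # only top-level traces
    filter='eq(status, "error")',  # LangSmith filter query language
)
for run in runs:
    print(run.id, run.name, run.status)
```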
 
Dashboards
Use LangSmith's custom and built-in dashboards to gain insight into your production systems.
Automations
Use LangSmith's monitoring, automation, and online evaluation features to make sense of your production data.