AI guides
Guides and tricks about AI, LLMs and everything in between
All Articles
Islands of Confidence: Make LLM Apps More Reliable by Running *Fewer* LLMs
In this article, I want to share a method to improve your LLM’s reliability, making...
Production ML for Practitioners: How to Accelerate Model Training with LightGBM & Optuna
Refer to Google Colab for code snippets. When it comes to the world of data...
The State of Production LLMs: My Takeaways from MLOps World 2023
Recently, I was lucky enough to attend MLOps World in Austin. There were panels, provoking...
How to Put Responsible AI into Practice
You’ve probably heard about the hallucinations AI can experience and the potential risks they introduce when left unchecked. From Amazon’s job recruiting models filtering out female candidates to...
The Framework for Building Great AI Products
In the first installment of our guide to building great AI products, we discussed the challenges of deploying models to production and shifting your mindset from ML models...
Understanding Embeddings in Machine Learning: Types, Alternatives, and Drift
Introduction Machine learning algorithms, specifically in NLP, LLM, and computer vision models, often deal with...
How to Build Great AI Products: Shifting Your Mindset
When deploying AI products, model accuracy is crucial, but it’s just the tip of the...
How to Optimize ML Fraud Detection: A Guide to Monitoring & Performance
Fraud detection is a mainstream machine learning (ML) use case. In recent years, the demand...
Dealing with Outliers in A/B testing: Methods and Best Practices
*Google Colab with code snippets here. **Notebook tests use simple dummy data, not to simulate...
Recall: A Key Metric for Evaluating Model Performance
Measuring the performance of ML models is crucial, and the ML evaluation metric – Recall...
Understanding Binary Cross-Entropy and Log Loss for Effective Model Monitoring
Introduction Accurately evaluating model performance is essential for understanding how well your ML model is...
Root Mean Square Error (RMSE): The Cornerstone for Evaluating Regression Models
Today’s spotlight is on Root Mean Square Error (RMSE) – a pivotal evaluation metric commonly...