Professional Python tools for AI development: database management, LLM integration, document search, and scientific computing
Complete pgdbm skill set: PostgreSQL database management with async operations, migrations, and connection pooling. Use when building FastAPI apps, microservices, or multi-tenant SaaS with PostgreSQL.
Decision guide for choosing the right pgdbm pattern (standalone, dual-mode, or shared pool) based on deployment context.
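The three-way choice (standalone, dual-mode, shared pool) can be sketched as a tiny decision function. The boolean inputs and the exact pattern names below are an illustration of the guide's logic, not part of pgdbm's API:

```python
def choose_pattern(owns_deployment, multiple_services_or_schemas, is_reusable_library):
    """Map deployment context to one of the three pgdbm patterns.

    Illustrative only: the real decision guide weighs more factors,
    but the precedence shown here (library > multi-service > simple)
    captures its shape.
    """
    if is_reusable_library:
        return "dual-mode"    # works standalone or with an injected pool
    if multiple_services_or_schemas:
        return "shared-pool"  # one pool shared across schemas/managers
    return "standalone"       # simplest: the service owns its database
```

A single-database microservice would land on `choose_pattern(True, False, False)` → `"standalone"`, while a PyPI library meant to be embedded would resolve to `"dual-mode"`.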
Production pattern for pgdbm using one shared connection pool across multiple schemas or microservices. Critical for FastAPI apps and multi-service architectures.
Pattern for building reusable PyPI packages with pgdbm that work standalone or embedded. Use when creating database libraries.
Standalone service pattern for simple microservices with their own database. Simplest pgdbm setup.
Testing database code with pgdbm: pytest fixtures, isolated test databases, and integration patterns. Use when writing database tests.
Mental model and core pgdbm operations: connecting, querying, transactions, migrations. Start here for understanding pgdbm patterns.
Complete AsyncDatabaseManager and DatabaseConfig API reference with all methods, parameters, and configuration options.
Complete AsyncMigrationManager API reference with migration file format, checksum validation, and version control.
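Checksum validation can be illustrated in isolation: record a content hash when a migration is applied, then refuse to proceed if an applied file has since been edited. SHA-256 and both function names below are assumptions for the sketch, not AsyncMigrationManager's actual internals:

```python
import hashlib

def migration_checksum(sql_text):
    """Stable checksum of a migration file's contents (SHA-256 assumed)."""
    return hashlib.sha256(sql_text.encode("utf-8")).hexdigest()

def verify_applied(applied, migration_files):
    """Fail fast if an already-applied migration file was edited.

    applied: maps version -> checksum recorded at apply time.
    migration_files: maps version -> current file contents on disk.
    """
    for version, recorded in applied.items():
        current = migration_checksum(migration_files[version])
        if current != recorded:
            raise RuntimeError(f"migration {version} changed after being applied")
```

The point of the check is that migrations are append-only: fixing a mistake means writing a new migration, never editing an applied one.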
Common pgdbm mistakes and how to avoid them: pool multiplication, schema errors, template syntax violations. Use before implementing.
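The pool-multiplication mistake is easiest to see with stand-in classes. `FakePool` and `Manager` below are hypothetical placeholders, not pgdbm types; the shape they illustrate is "create the pool once, inject it everywhere":

```python
class FakePool:
    """Stands in for a real connection pool; counts instantiations."""
    instances = 0
    def __init__(self):
        FakePool.instances += 1

class Manager:
    """One manager per schema/service; all should share a single pool."""
    def __init__(self, pool, schema):
        self.pool = pool
        self.schema = schema

# Wrong: three managers, three pools -> 3x the database connections.
wrong = [Manager(FakePool(), s) for s in ("auth", "billing", "docs")]

# Right: create the pool once and hand the same instance to every manager.
FakePool.instances = 0
shared = FakePool()
right = [Manager(shared, s) for s in ("auth", "billing", "docs")]
assert FakePool.instances == 1
```

With per-manager pools, connection usage scales with the number of services times the pool size, which is how deployments silently exhaust PostgreSQL's `max_connections`.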
Complete llmemory skill set: state-of-the-art RAG library with hybrid search (vector + BM25), query expansion (heuristic + LLM), reranking, query routing, contextual retrieval, and multi-tenant support. Use when building production RAG systems, semantic search, or document Q&A.
Getting started with llmemory: installation, initialization, adding documents, search operations, document management. Start here for llmemory basics.
Hybrid search combining vector similarity and BM25 full-text search with Reciprocal Rank Fusion (RRF). Alpha tuning, HNSW configuration, search optimization.
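Reciprocal Rank Fusion itself is library-independent and small enough to show whole. This is a minimal sketch of the algorithm, not llmemory's implementation; `k=60` is the constant commonly used with RRF:

```python
def rrf_fuse(ranked_lists, k=60):
    """Fuse multiple ranked result lists with Reciprocal Rank Fusion.

    Each input is a list of document ids ordered best-first; a document's
    fused score is the sum of 1 / (k + rank) over every list it appears in.
    """
    scores = {}
    for results in ranked_lists:
        for rank, doc_id in enumerate(results, start=1):
            scores[doc_id] = scores.get(doc_id, 0.0) + 1.0 / (k + rank)
    return sorted(scores, key=scores.get, reverse=True)

vector_hits = ["d3", "d1", "d2"]   # ordered by vector similarity
bm25_hits   = ["d1", "d4", "d3"]   # ordered by BM25 score
fused = rrf_fuse([vector_hits, bm25_hits])  # -> ["d1", "d3", "d4", "d2"]
```

Note that `d1` wins despite topping neither list: RRF rewards documents that rank well under both retrieval signals, which is the core appeal of hybrid search.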
Query expansion with heuristic and LLM-based variants for improved retrieval recall. Multi-query search with RRF fusion across query variants.
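The heuristic (non-LLM) side of query expansion can be sketched with plain string manipulation. The specific heuristics and the function name below are illustrative assumptions, not llmemory's expansion rules:

```python
def heuristic_variants(query):
    """Generate cheap query variants without an LLM call.

    Illustrative heuristics: the original query, a keyword-only form
    (stopwords dropped), and a question form of the keywords.
    """
    stopwords = {"the", "a", "an", "of", "for", "to", "in", "how", "do", "i"}
    keywords = " ".join(w for w in query.lower().split() if w not in stopwords)
    variants = [query]
    if keywords and keywords != query.lower():
        variants.append(keywords)
    if not query.rstrip().endswith("?"):
        variants.append(f"what is {keywords}?")
    return variants
```

Each variant is then searched independently and the per-variant result lists are fused with RRF, so a document only needs to match one phrasing well to surface.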
Multi-tenant patterns with owner-based isolation for SaaS applications. Security implementation, FastAPI integration, workspace separation.
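The security property behind owner-based isolation is that no request handler can run a search without a tenant filter. One way to make that unavoidable is a wrapper that binds the owner once; the class below is a hypothetical sketch of the pattern, not llmemory's API:

```python
class TenantScopedSearch:
    """Bind a tenant's owner_id once so call sites can't forget it.

    search_fn is any search callable that accepts an owner_id keyword;
    every call through the wrapper is automatically tenant-scoped.
    """
    def __init__(self, search_fn, owner_id):
        if not owner_id:
            raise ValueError("owner_id is required for tenant isolation")
        self._search = search_fn
        self._owner = owner_id

    def search(self, query, **kwargs):
        return self._search(query, owner_id=self._owner, **kwargs)
```

In a FastAPI app, a dependency would construct one of these per request from the authenticated user, so handlers only ever see the scoped object.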
Building production RAG systems: document ingestion, hybrid search retrieval, query routing, reranking, prompt augmentation. Complete RAG pipeline patterns.
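The prompt-augmentation step at the end of that pipeline is simple enough to sketch directly. The prompt layout and numbering scheme below are assumptions for illustration, not llmemory's template:

```python
def build_rag_prompt(question, chunks):
    """Augment a user question with numbered retrieved context.

    Numbering the chunks lets the model cite sources as [1], [2], ...
    """
    context = "\n\n".join(f"[{i}] {chunk}" for i, chunk in enumerate(chunks, start=1))
    return (
        "Answer using only the context below. "
        "Cite chunk numbers for each claim.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )
```

The retrieved chunks come from the hybrid-search and reranking stages; this function is just the final assembly before the LLM call.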
Complete llmring skill set: unified LLM interface for OpenAI, Anthropic, Google, and Ollama with streaming, tools, structured output, and multi-provider patterns. Use when building LLM applications.
Basic chat completions with llmring: unified interface, message structure, resource management. Start here for llmring basics.
Streaming responses with llmring: async iteration, real-time output, usage tracking. Use when building chat interfaces or displaying incremental responses.
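The consumption side of streaming is plain `async for` over chunks, which can be shown without any provider. `fake_stream` below stands in for a real token stream; only the iteration pattern is the point:

```python
import asyncio

async def fake_stream():
    """Stands in for a provider's incremental token stream."""
    for chunk in ("Hel", "lo", ", wor", "ld"):
        yield chunk

async def collect(stream):
    """Consume a stream chunk-by-chunk while accumulating the full text.

    In a chat UI, each chunk would be flushed to the display as it
    arrives; here we just accumulate and return the final string.
    """
    parts = []
    async for chunk in stream:
        parts.append(chunk)
    return "".join(parts)

text = asyncio.run(collect(fake_stream()))  # -> "Hello, world"
```

Usage tracking typically arrives on the final chunk of a real stream, so the loop body is also where you would watch for a terminal usage/stats event.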
Function calling and tool use with llmring: tool definitions, execution patterns, multi-turn conversations. Use when building agents or adding function calling.
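The execution half of tool use is a dispatch loop: take the model's requested calls, run the matching local functions, and feed results back as tool messages. The dict shapes below are a simplified cross-provider convention assumed for illustration, not llmring's exact message format:

```python
import json

def run_tool_calls(tool_calls, registry):
    """Execute model-requested tool calls and build tool-result messages.

    tool_calls: list of {"id", "name", "arguments"} dicts, where
    "arguments" is a JSON string (the common provider convention).
    registry: maps tool name -> local Python callable.
    """
    messages = []
    for call in tool_calls:
        fn = registry[call["name"]]
        result = fn(**json.loads(call["arguments"]))
        messages.append({
            "role": "tool",
            "tool_call_id": call["id"],
            "content": json.dumps(result),
        })
    return messages
```

In a multi-turn agent, these tool messages are appended to the conversation and the model is called again, looping until it responds without requesting further tools.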
Structured output with JSON schema: type-safe responses, data extraction, validation. Use when extracting structured data from LLMs or enforcing output schemas.
Lockfile configuration with llmring: semantic aliases, environment profiles, fallback models. Use when configuring model aliases or managing dev/staging/prod environments.
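Alias resolution with profiles and fallbacks reduces to an ordered lookup. The lockfile layout, function name, and model ids below are assumptions sketching the idea, not llmring's actual file format:

```python
def resolve_alias(lockfile, alias, profile="default"):
    """Resolve a semantic alias to (preferred model, fallback models).

    Assumed layout: profiles map aliases to an ordered candidate list;
    the first entry is preferred, the rest are tried on failure.
    Unknown profiles fall back to the default profile.
    """
    profiles = lockfile["profiles"]
    bindings = profiles.get(profile, profiles["default"])
    candidates = bindings[alias]
    return candidates[0], candidates[1:]

lockfile = {
    "profiles": {
        "default": {"summarizer": ["anthropic:claude-3-5-haiku", "openai:gpt-4o-mini"]},
        "prod":    {"summarizer": ["anthropic:claude-sonnet-4", "openai:gpt-4o"]},
    }
}
model, fallbacks = resolve_alias(lockfile, "summarizer", profile="prod")
```

Application code only ever says `"summarizer"`; which concrete model that means per environment lives in the lockfile, so promoting a model to prod is a config change, not a code change.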
Multi-provider patterns with llmring: provider switching, raw SDK access, provider-specific features (caching, logprobs). Use when switching providers or accessing advanced features.