Python SDK

The official CredVault SDK for Python applications. Designed for data scientists, backend developers, and automation engineers, this SDK makes it easy to integrate CredVault into your Python projects.

Why Use the Python SDK?

Python is the language of choice for data science and machine learning, making it a natural fit for CredVault's Intelligence Engine. The SDK works seamlessly in Jupyter notebooks, allowing you to explore data and train models interactively.

For backend development, the SDK integrates with Flask, Django, FastAPI, and other popular frameworks. The API design follows Python conventions, using snake_case naming and returning native Python types.

The SDK supports Python 3.8 and later. All I/O operations use synchronous methods by default, making it easy to use in scripts and notebooks. For high-concurrency applications, async support is available.

Installation

The SDK is available on PyPI, the Python Package Index. Install it using pip, poetry, or your preferred package manager:

pip install credvault

Connecting to CredVault

Create a client instance to start making API calls. The client requires authentication credentials and optionally accepts a custom base URL for enterprise deployments.

import os
from credvault import CredVault

client = CredVault(
    api_key=os.environ.get("CREDVAULT_API_KEY")
)

For most applications, authenticate with an API key. Store your API key in an environment variable or a secure configuration file. Avoid hardcoding credentials in your source code.

For applications that act on behalf of users, authenticate with a bearer token. Tokens are obtained through the login endpoint and should be refreshed before expiration.
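As a sketch of token-based authentication, assuming the client constructor accepts a `token` keyword (the exact parameter name may differ; check the SDK reference):

```python
import os

from credvault import CredVault

# Hypothetical sketch: the `token` keyword argument is an assumption,
# not a confirmed SDK parameter name. The token itself comes from the
# login endpoint and should be refreshed before it expires.
client = CredVault(token=os.environ.get("CREDVAULT_TOKEN"))
```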

Working with Data

The data resource provides methods for interacting with your database clusters.

Listing clusters retrieves all clusters in your account. Use this to discover available databases or build administrative interfaces.

Querying collections fetches documents matching specified criteria. The method accepts a dictionary for filtering and supports various query operators.

Inserting documents adds new records to collections. Pass a list of dictionaries to insert multiple documents efficiently.

Updating documents modifies existing records. Specify a filter to select documents and provide the changes to apply.

Deleting documents removes records permanently. Always double-check your filter to avoid unintended data loss.
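The operations above might look like the following sketch. The method names (`list_clusters`, `query`, `insert`, `update`, `delete`) and their parameters are assumptions based on the descriptions here, not confirmed SDK signatures:

```python
import os

from credvault import CredVault

# Hypothetical sketch of the data resource; names are assumed.
client = CredVault(api_key=os.environ.get("CREDVAULT_API_KEY"))

# Discover available clusters in the account.
clusters = client.data.list_clusters()

# Fetch documents matching a dictionary filter.
active_users = client.data.query(
    cluster="prod", collection="users",
    filter={"status": "active"},
)

# Insert multiple documents in one call by passing a list.
client.data.insert(
    cluster="prod", collection="users",
    documents=[{"name": "Ada"}, {"name": "Grace"}],
)

# Select documents with a filter, then apply the changes.
client.data.update(
    cluster="prod", collection="users",
    filter={"name": "Ada"}, changes={"status": "inactive"},
)

# Deletion is permanent -- verify the filter before running this.
client.data.delete(
    cluster="prod", collection="users",
    filter={"status": "inactive"},
)
```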

Using the Intelligence Engine

The Python SDK excels at working with CIE, the CredVault Intelligence Engine.

Uploading datasets prepares your data for model training. The SDK handles file uploads efficiently, supporting large datasets.

Training models starts a training job with your configuration. Models train on CredVault's infrastructure, freeing your local machine for other work.

Running predictions invokes your trained models. Pass input data and receive predictions in response.

These capabilities integrate naturally with popular data science libraries. Load data from CredVault into pandas DataFrames, train models, and deploy them back to the platform.
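A sketch of that workflow, assuming `cie.upload_dataset`, `cie.train`, and `cie.predict` methods matching the descriptions above (these names are illustrative, not confirmed API):

```python
import os

from credvault import CredVault

# Hypothetical sketch of a CIE workflow; method names and return
# shapes are assumptions -- consult the SDK reference for the real API.
client = CredVault(api_key=os.environ.get("CREDVAULT_API_KEY"))

# Upload a dataset to prepare it for training.
dataset = client.cie.upload_dataset("churn.csv")

# Start a training job; the model trains on CredVault's infrastructure.
job = client.cie.train(dataset_id=dataset["id"], model_type="classifier")

# Invoke the trained model with input data and receive predictions.
predictions = client.cie.predict(
    model_id=job["model_id"],
    inputs=[{"tenure_months": 12, "plan": "pro"}],
)
```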

Handling Errors

The SDK raises descriptive exceptions when operations fail. Use try/except blocks to handle errors appropriately.

Authentication errors occur when credentials are invalid or expired. Not found errors indicate a resource doesn't exist. Validation errors happen when request parameters are malformed.
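A minimal sketch of handling these cases, assuming exception classes named `AuthenticationError`, `NotFoundError`, and `ValidationError` in a `credvault.errors` module (illustrative names; check the SDK's exception hierarchy):

```python
from credvault import CredVault
# Hypothetical sketch: the module path and class names below are
# assumptions, not confirmed SDK identifiers.
from credvault.errors import (
    AuthenticationError,
    NotFoundError,
    ValidationError,
)

client = CredVault(api_key="...")

try:
    doc = client.data.query(
        cluster="prod", collection="users", filter={"_id": "u123"},
    )
except AuthenticationError:
    pass  # credentials invalid or expired -- refresh and retry
except NotFoundError:
    pass  # the cluster or collection doesn't exist
except ValidationError as exc:
    print(exc)  # malformed request parameters; message explains why
```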

Resources Available

All platform capabilities are accessible through the SDK:

  • auth — User authentication and sessions
  • data — Database clusters and collections
  • cie — Machine learning training and inference
  • webhooks — Event notifications
  • functions — Serverless function execution
  • triggers — Database event automation
  • backups — Backup management
  • schema — Schema and index operations
  • api_keys — API key lifecycle
  • metrics — Platform metrics
  • logs — Activity logging
  • notifications — Notification management
  • settings — Configuration options

Using in Notebooks

Jupyter notebooks are excellent for exploring data and prototyping. The SDK works naturally in this environment.

Start by importing the library and creating a client. Then fetch data from your clusters and work with it using pandas, numpy, or your preferred tools.
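For example, a notebook cell might pull query results into a pandas DataFrame. The `data.query` call and its parameters are assumed names, as above:

```python
import os

import pandas as pd
from credvault import CredVault

# Hypothetical notebook sketch; the query method and its parameters
# are assumptions, not confirmed SDK API.
client = CredVault(api_key=os.environ.get("CREDVAULT_API_KEY"))

rows = client.data.query(
    cluster="analytics", collection="events",
    filter={"type": "signup"},
)

# Load the documents into a DataFrame for interactive exploration.
df = pd.DataFrame(rows)
df.head()
```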

The interactive nature of notebooks makes them ideal for:

  • Exploring new datasets
  • Prototyping queries before implementing them in applications
  • Testing machine learning models during development
  • Creating data analysis reports

Best Practices

Use virtual environments. Isolate your project dependencies to avoid conflicts with other Python projects.

Store credentials securely. Use environment variables or tools like python-dotenv to manage API keys.

Handle rate limits. Implement retry logic with exponential backoff for robust applications.
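A generic retry helper with exponential backoff and jitter could look like this. It is not part of the CredVault SDK; it wraps any call that may hit a rate limit:

```python
import random
import time


def with_backoff(call, retries=5, base=0.5, max_delay=30.0):
    """Retry `call` with exponential backoff plus jitter.

    Generic helper (not an SDK feature). In practice you would catch
    the SDK's specific rate-limit exception rather than Exception.
    """
    for attempt in range(retries):
        try:
            return call()
        except Exception:
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            # Double the delay each attempt, capped at max_delay,
            # with up to 10% random jitter to avoid thundering herds.
            delay = min(max_delay, base * 2 ** attempt)
            time.sleep(delay + random.uniform(0, delay * 0.1))
```

Usage might look like `with_backoff(lambda: client.data.list_clusters())`.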

Keep the SDK updated. New features and bug fixes are released regularly.

For complete documentation, refer to the SDK's README on GitHub or explore the source code for implementation details.