- Defined columns with specific data types
- Constraints like nullable, unique, and default values
- Comments for documentation
- PostgreSQL compatibility for familiar SQL operations
Creating Tables
Creates a new structured table in a Knowledge Base. Define the table schema with columns, data types, constraints, and documentation by adding Column objects to the columns list.
Columns
Each column can have the following properties:
- name: The name of the column.
- type: The PostgreSQL data type of the column.
- nullable: Whether the column can contain NULL values.
- unique: Whether the column values must be unique.
- default: Default value for the column (can use SQL functions like NOW()).
- comment: Description or comment about the column.
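As a plain-Python illustration (the client may use Column objects with different field names - the dictionary keys below are assumptions, not the SDK's API), a column definition carrying these properties could look like:

```python
# Illustrative only: a plain dict mirroring the column properties above.
column = {
    "name": "created_at",
    "type": "timestamptz",           # any supported PostgreSQL type
    "nullable": False,               # column may not contain NULL
    "unique": False,                 # values need not be unique
    "default": "NOW()",              # SQL function used as a default
    "comment": "Row creation time",  # documentation comment
}

REQUIRED_KEYS = {"name", "type"}

def validate_column(col: dict) -> bool:
    """Check that a column definition has the minimum required fields."""
    return REQUIRED_KEYS.issubset(col)

print(validate_column(column))  # True
```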
Data Types
The following PostgreSQL data types are supported:

Integer
- int - 32-bit integer
- smallint - 16-bit integer
- bigint - 64-bit integer

Floating Point
- real - 32-bit floating point
- double_precision - 64-bit floating point
- numeric(p,s) - Decimal with precision and scale (e.g., numeric(10,2))

Text
- text - Variable-length character strings
- varchar(n) - Variable character with length limit (e.g., varchar(255))
- char(n) - Fixed-length character (e.g., char(10))
- ltree - Hierarchical tree-like structures

Boolean
- boolean - True/false values

Date/Time
- date - Date only
- timestamp - Date and time (without timezone)
- timestamptz - Date and time with timezone

JSON
- json - JSON data
- jsonb - Binary JSON data (more efficient)
Importing Data
When importing data, all modes use an upsert strategy. This means that for every row in your import:
- If a record with the same primary key already exists in the table, it will be updated with the new values from your import (all fields are replaced by the new data).
- If no matching record exists, a new record will be inserted.
For more granular control over update behavior (like updating only specific fields), use SQL queries with custom UPDATE statements.
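The upsert semantics above can be sketched in plain Python (a conceptual model, not the import implementation): treat the table as a mapping from primary key to row, where a matching key replaces the whole row and a new key inserts one.

```python
def upsert(table: dict, rows: list[dict], pk: str = "id") -> dict:
    """Apply upsert semantics: replace the whole row on key match, insert otherwise."""
    for row in rows:
        table[row[pk]] = row  # full replacement, not a partial merge
    return table

table = {1: {"id": 1, "name": "Alice", "city": "Oslo"}}
upsert(table, [
    {"id": 1, "name": "Alice B."},  # existing key: row replaced entirely
    {"id": 2, "name": "Bob"},       # new key: row inserted
])
print(table[1])  # {'id': 1, 'name': 'Alice B.'} - note 'city' is gone
```

Note that the existing row loses its `city` field: the import replaces all fields rather than merging them, which is why partial updates require custom SQL.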
Import from Records
Takes a list of dictionaries and inserts them as rows in the specified table.

Import from CSV
Reads CSV files and loads the data into existing tables with schema validation.
CSV Requirements:
- Null Values: Leave cells empty (as shown in row 2 for the name field) to represent NULL values. Do not use text like "null" or "NULL" - these will be treated as literal strings.
- Encoding: Files must be UTF-8 encoded. Other encodings or UTF-8 variants will not be accepted.
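A minimal sketch of producing a compliant file with Python's standard csv module - the users.csv filename and columns are hypothetical. Cells for NULL are left empty, and the file is written as UTF-8:

```python
import csv

rows = [
    {"id": "1", "name": "Alice", "age": "30"},
    {"id": "2", "name": "", "age": "25"},  # empty cell -> NULL, not the string "null"
]

# newline="" is required by the csv module; encoding="utf-8" satisfies the import rule.
with open("users.csv", "w", newline="", encoding="utf-8") as f:
    writer = csv.DictWriter(f, fieldnames=["id", "name", "age"])
    writer.writeheader()
    writer.writerows(rows)

# Read it back to confirm the empty cell survived as an empty string.
with open("users.csv", newline="", encoding="utf-8") as f:
    back = list(csv.DictReader(f))
print(back[1]["name"])  # ''
```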
Troubleshooting
Common issues you might encounter when importing data:

Record Structure Issues
Problem: Missing fields or inconsistent data structure errors.

Solution: Ensure all records have identical field structures. The first record determines the expected schema for all subsequent records.
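A quick pre-flight check along these lines can be written in plain Python - flag any record whose keys differ from the first record's:

```python
def check_record_structure(records: list[dict]) -> list[int]:
    """Return indexes of records whose keys differ from the first record's keys."""
    if not records:
        return []
    expected = set(records[0])  # the first record fixes the expected schema
    return [i for i, r in enumerate(records) if set(r) != expected]

records = [
    {"id": 1, "name": "Alice"},
    {"id": 2, "name": "Bob"},
    {"id": 3},  # missing "name" -> flagged
]
print(check_record_structure(records))  # [2]
```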
Constraint Violations (Foreign Key & Duplicate Key)
Problem: Import fails due to constraint errors, such as "foreign key constraint fails," "duplicate key," or "unique constraint" violations.

Solution:
- Foreign Key Constraints: Import data in dependency order:
  - Parent tables first (referenced tables)
  - Child tables second (tables with foreign keys)
- Duplicate/Unique Constraints: Before importing, verify that new records do not conflict with unique constraints.
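Both checks can be run before importing. Here is a plain-Python sketch (table and column names are hypothetical) that flags duplicate values on a would-be unique column and child rows whose foreign key has no parent:

```python
from collections import Counter

def duplicate_values(rows: list[dict], column: str) -> set:
    """Values that occur more than once in a column that must be unique."""
    counts = Counter(r[column] for r in rows)
    return {v for v, n in counts.items() if n > 1}

def orphan_rows(children: list[dict], fk: str, parent_ids: set) -> list[dict]:
    """Child rows whose foreign key has no matching parent - import parents first."""
    return [r for r in children if r[fk] not in parent_ids]

users = [{"id": 1}, {"id": 2}]
posts = [{"id": 10, "customer_id": 1}, {"id": 11, "customer_id": 99}]

print(duplicate_values(users, "id"))  # set() - no duplicates, safe to import
print(orphan_rows(posts, "customer_id", {u["id"] for u in users}))  # post 11 is orphaned
```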
Querying Data
SQL
Unleash the full power of PostgreSQL against your knowledge base tables. Execute sophisticated queries with advanced functions, window operations, CTEs, joins, aggregations, and everything PostgreSQL has to offer.

Available Extensions:
- postgis: geography operations
- pg_trgm: trigram matching
- unaccent: accent-insensitive text operations
- ltree: hierarchical tree-like structures
- vector: vector similarity operations
In PostgreSQL, quoted table names are case-sensitive, while unquoted identifiers are folded to lowercase - wrap mixed-case table names in double quotes.
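When building queries in application code, a small helper can apply the standard SQL quoting rule (wrap the identifier in double quotes, doubling any embedded quote). This is a generic sketch, not part of the client library:

```python
def quote_ident(name: str) -> str:
    """Quote a SQL identifier: wrap in double quotes, double embedded quotes."""
    return '"' + name.replace('"', '""') + '"'

print(quote_ident("MyTable"))  # "MyTable"

# Hypothetical usage when assembling a query string:
query = f'SELECT * FROM {quote_ident("MyTable")} LIMIT 10;'
print(query)  # SELECT * FROM "MyTable" LIMIT 10;
```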
Semantic Search
Discover contextually relevant data with semantic search that finds what you're looking for, even when you don't know the exact words.

For semantic search capabilities, you'll need to create an embedding first.
Use fts_columns to specify which columns should be included in full-text search. When combined with embedding_names, you get hybrid search that leverages both semantic understanding and keyword matching.
For a complete list of parameters and their descriptions, see the API reference.
Querying with multiple embeddings is not supported yet, but the API is ready and this feature is coming soon.
Managing Tables
List All Tables
Returns metadata for all tables in the specified knowledge base.

Get Table Details

Retrieves the complete definition of a specific table, returning table metadata including column definitions, constraints, and comments.

Get Table SQL Definition

Returns the exact SQL definition that would recreate the table.

Get All Tables SQL Definition

Returns the SQL definitions for all tables in the knowledge base.

Delete Table

Completely removes the table structure and data from the knowledge base.

Modifying Tables
Add Column
Adds a new column to an existing table.

Drop Column

Removes a column from an existing table.

Rename Column

Changes the name of an existing column.

Rename Table

Changes the name of an existing table.

Adding Constraints
Add Foreign Key
Creates a foreign key relationship between tables.
What happens with this constraint:
- Data integrity: You can only insert customer_id values in Posts that exist as id values in Users
- Cascade delete: When a User is deleted, all their Posts are automatically deleted too
- Referential consistency: The database ensures the relationship between tables is always valid
- Query optimization: The database can optimize joins between these tables more efficiently
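The first two effects can be made concrete with a plain-Python simulation - this models the constraint's behavior, not database code, and follows the Users/Posts columns from the example above:

```python
def insert_post(posts: list[dict], users: list[dict], post: dict) -> None:
    """Reject a post whose customer_id references a nonexistent user."""
    if post["customer_id"] not in {u["id"] for u in users}:
        raise ValueError("foreign key violation: no such user")
    posts.append(post)

def delete_user(users: list[dict], posts: list[dict], user_id: int) -> None:
    """Deleting a user cascades to that user's posts."""
    users[:] = [u for u in users if u["id"] != user_id]
    posts[:] = [p for p in posts if p["customer_id"] != user_id]

users = [{"id": 1}, {"id": 2}]
posts = []
insert_post(posts, users, {"id": 10, "customer_id": 1})  # ok: user 1 exists
delete_user(users, posts, 1)                             # cascades to post 10
print(posts)  # [] - the post was removed with its user
```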
Add Check Constraint
Adds a validation rule to ensure data meets specific conditions.

What happens with this constraint:
- Data validation: Every insert or update must satisfy the check condition
- Automatic rejection: Records that don't meet the criteria are automatically rejected
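Conceptually, a check constraint behaves like the following plain-Python validator (the age column and its condition are hypothetical examples, standing in for something like CHECK (age >= 0)):

```python
def check_age(row: dict) -> None:
    """Enforce a CHECK-style condition: reject rows where age < 0."""
    if not row["age"] >= 0:
        raise ValueError("check constraint violated: age >= 0")

check_age({"age": 30})     # accepted: no exception raised
try:
    check_age({"age": -1})  # rejected automatically
except ValueError as e:
    print(e)  # check constraint violated: age >= 0
```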
Add Unique Constraint
Ensures that values in specified columns are unique across rows.

What happens with this constraint:
- Uniqueness enforcement: No two rows can have the same combination of values in the specified columns
- Composite uniqueness: When multiple columns are specified, the combination must be unique (individual columns can still have duplicates)
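Composite uniqueness can be illustrated with a small plain-Python check - the first/last columns are hypothetical:

```python
def violates_unique(rows: list[dict], columns: tuple) -> bool:
    """True if two rows share the same combination of values in `columns`."""
    seen = set()
    for r in rows:
        key = tuple(r[c] for c in columns)
        if key in seen:
            return True
        seen.add(key)
    return False

rows = [
    {"first": "Ada", "last": "Lovelace"},
    {"first": "Ada", "last": "Byron"},  # duplicate "first" alone is fine
]
print(violates_unique(rows, ("first", "last")))  # False - combinations differ
print(violates_unique(rows + [{"first": "Ada", "last": "Byron"}],
                      ("first", "last")))        # True - exact combination repeats
```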