# Python API

## DSLParser

The main entry point for using Pivotal programmatically.

### parse(source)

Parse a Pivotal source string into an abstract syntax tree (a list of AST nodes).

Returns: A list of AST node dicts, or `{'error': '...'}` on parse failure.
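Because parse() returns either a node list or an error dict, callers typically branch on the shape. A minimal sketch; the node dict below is a hypothetical stand-in, not the real node schema:

```python
def parse_ok(result):
    """True unless parse() signalled failure with an error dict."""
    return not (isinstance(result, dict) and 'error' in result)

# Hypothetical stand-ins for the two possible return shapes of parse():
nodes = [{'type': 'load', 'path': 'data/sales.csv', 'alias': 'sales'}]
failure = {'error': 'unexpected token at line 2'}

print(parse_ok(nodes))     # True
print(parse_ok(failure))   # False
```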
### generate_code(results, backend='pandas')

Generate code from parsed AST nodes.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| results | list | AST from parse() |
| backend | str | 'pandas' (default), 'duckdb', or 'sql' |

Returns: A list of code strings (one per logical block). Join them to produce a complete script.
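A sketch of the join step; the strings below are illustrative stand-ins for what generate_code() might return for the pandas backend, not verified output:

```python
# Hypothetical stand-in for generate_code(results, backend='pandas'):
blocks = [
    "import pandas as pd",
    "sales = pd.read_csv('data/sales.csv')",
    "summary = sales.groupby('region', as_index=False)['revenue'].sum()",
]

# One code string per logical block; join them to get a runnable script.
script = "\n\n".join(blocks)
print(script.startswith("import pandas"))   # True
```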
### execute(source, globals_dict, backend='pandas', verbose=True)

Parse and execute Pivotal source in one call.

```python
tables = parser.execute("""
load "data/sales.csv" as sales
with sales as summary
group by region
agg sum revenue as total
sort total desc
""", globals())
```
Parameters:

| Parameter | Type | Description |
|---|---|---|
| source | str | Pivotal DSL source |
| globals_dict | dict | Namespace to execute in; pass globals() or a dict |
| backend | str | 'pandas' (default), 'duckdb', or 'polars' |
| verbose | bool | Print table shape and preview after each step (default: True) |

Returns: A dict of {table_name: DataFrame} for every table produced.
Pass a custom dict to share variables or isolate execution:

```python
ns = {'threshold': 1000, 'sales': existing_df}

tables = parser.execute("""
with sales as filtered
filter amount > :threshold
""", ns)

filtered = tables['filtered']
```
### export(source, backend='pandas')

Generate a clean, standalone Python script from Pivotal source.

```python
script = parser.export(source, backend='duckdb')

with open('analysis.py', 'w') as f:
    f.write(script)
```

Returns: A string containing the complete Python script, including imports.
## Package

Manage data packages: collections of tables and charts saved to disk.

### Package.export(name, globals, path=None, fmt='csv', chart_fmt='png', include=None, exclude=None)

Save all tables and charts from the current session to a data package.

Parameters:

| Parameter | Type | Description |
|---|---|---|
| name | str | Package name (becomes the directory name) |
| globals | dict | The calling scope's globals (pass globals()) |
| path | str | Output directory (default: current directory) |
| fmt | str | Data format: 'csv' (default) or 'parquet' |
| chart_fmt | str | Chart format: 'png' (default) or 'svg' |
| include | list | Object names to include (default: all) |
| exclude | list | Object names to exclude (default: none) |
### Package.open(name, path=None)

Load a previously saved package.

Returns: A Package object.

### pkg.load_all()

Load all tables from the package into a dict.

### pkg.load_table(name)

Load a single table by name.
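A sketch of the shapes involved, using lists of dicts as stand-ins for the DataFrames a real package would yield (table names hypothetical):

```python
# Stand-in for the dict returned by pkg.load_all(): keyed by table name.
# In a real session the values are DataFrames read from the package directory.
tables = {
    'sales': [{'region': 'east', 'revenue': 100}],
    'summary': [{'region': 'east', 'total': 100}],
}

# pkg.load_table('summary') fetches one table, like a single lookup here:
summary = tables['summary']
print(sorted(tables))   # ['sales', 'summary']
```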
## Notebook export functions

### notebook_to_python(path, backend='pandas')

Export a Jupyter notebook to a Python or SQL file.

```python
from pivotal.__main__ import notebook_to_python

notebook_to_python('analysis.ipynb', backend='duckdb')
# creates analysis.py
```
Parameters:

| Parameter | Type | Description |
|---|---|---|
| path | str | Absolute path to the .ipynb file |
| backend | str | 'pandas', 'duckdb', or 'sql' |

- `%%pivotal` cells are parsed and compiled to the target backend
- Regular Python cells are included as-is (except for backend='sql')
- GUI cells (`pivotal.*_gui()`) are skipped
### notebook_to_pivotal(path)

Export a Jupyter notebook to a .pivotal file.

```python
from pivotal.__main__ import notebook_to_pivotal

notebook_to_pivotal('analysis.ipynb')
# creates analysis.pivotal
```

- `%%pivotal` cells are written as-is (DSL source only, magic line stripped)
- Regular Python cells are wrapped in `python ... end` blocks in the exported .pivotal file
- GUI cells are skipped

The resulting .pivotal file is fully executable with the Pivotal CLI.