` elements with `<th>` tags for headers and `<td>` tags for data cells. The plugin's effectiveness is highly dependent on the quality of the input HTML.
## Notes
### Important Considerations
* **Supported File Type**: The plugin is hardcoded to only process files with the `.html` extension
* **Event Listener**: It operates on the `listener.on('file:created')` event
* **Multiple Tables**: Each `<table>` element found in the HTML document will be extracted into its own separate sheet within the Flatfile workbook. Sheets are named sequentially: `Table_1`, `Table_2`, and so on
* **Header Extraction**: Headers are extracted from `<th>` elements. If a table has no `<th>` elements, the `headers` array for that sheet will be empty, and data rows will likely not be mapped correctly
### Limitations
* **`maxDepth` Limitation**: The `maxDepth` configuration option is defined in the options type but is not currently implemented in the parsing logic. Nested tables are processed, but their depth is not limited by this setting
* **`rowspan` Implementation**: The current implementation for `handleRowspan` may not function as expected because it attempts to re-parse trimmed text content of a cell to find an attribute, which is not possible. This feature should be considered unreliable
### Error Handling
* The primary method for diagnosing issues is to set the `debug` option to `true`. This will print detailed logs of the extraction process, including tables found, headers extracted, and cell data
* If a data row contains more cells than there are headers, a warning is logged (in debug mode) and the extra cell data is ignored to prevent data misalignment
* The underlying HTML parser is generally resilient to malformed HTML, but if the table structure (`<table>`, `<tr>`, `<th>`, `<td>`) is invalid, the function may return an empty object or partially extracted data
# Overview
Source: https://flatfile.com/docs/plugins/index
Extend your Flatfile data engine with powerful plugins for extraction, transformation, validation, and integration
Flatfile plugins extend your data processing capabilities with pre-built, reusable modules for common data operations. They handle everything from file extraction and data transformation to validation and external integrations, allowing you to build sophisticated data workflows quickly.
## Open Source
All Flatfile plugins are **open source** and available on GitHub at [github.com/FlatFilers/flatfile-plugins](https://github.com/FlatFilers/flatfile-plugins).
## Plugin Categories
### 🔄 Transform
Plugins for data transformation, validation, and cleaning.
* Run custom logic on individual data records in real-time
* Automatically cast values to appropriate data types
* Extend schemas with external validation constraints
* Remove duplicate records via sheet-level actions
### 📤 Extractors
Plugins for parsing and extracting data from various file formats.
* Parse delimited files (CSV, TSV, etc.)
* Extract data from Excel spreadsheets
* Parse JSON files and convert to tabular data
* Extract structured data from PDF documents
### 📋 Schemas
Plugins for converting external schemas to Flatfile format.
* Convert JSON Schema to Flatfile Blueprint
* Convert OpenAPI schema to Flatfile Blueprint
* Convert SQL DDL to Flatfile Blueprint
* Convert YAML Schema to Flatfile Blueprint
### 📤 Export
Plugins for exporting processed data to external systems.
* Export data from Flatfile to various formats
* Send data to external webhooks
* Export workbooks as delimited files in ZIP archives
* Generate pivot tables from sheet data
### 🔧 Core
Essential plugins for Flatfile operations and configuration.
* Configure Flatfile spaces with workbooks and settings
* Handle Flatfile jobs with custom logic
* Show only mapped columns in post-mapping view
* Automatically roll out workbook changes
### ✅ Validation
Specialized validation plugins for data quality.
* Comprehensive email address validation
* Phone number formatting and validation
* Date format normalization and validation
* Number validation and formatting
### 🔗 Enrich
Plugins for data enrichment using external APIs.
* Geocode addresses using Google Maps API
* Analyze sentiment of text fields
* Summarize and extract key phrases from text
* Convert What3Words addresses to standard addresses
## Getting Started with Plugins
### Installation
Install plugins using npm in your Flatfile project:
```bash
npm install @flatfile/plugin-record-hook @flatfile/plugin-autocast
```
### Basic Usage
Add plugins to your listener:
```typescript
import { FlatfileListener } from '@flatfile/listener'
import { recordHook } from '@flatfile/plugin-record-hook'
import { autocast } from '@flatfile/plugin-autocast'
const listener = FlatfileListener.create(async (client) => {
  // Add autocast for automatic type conversion
  client.use(autocast())

  // Add record hook for custom validation
  client.use(
    recordHook('customers', (record) => {
      // Custom record processing logic
      const email = record.get('email')
      if (email && !isValidEmail(email)) {
        record.addError('email', 'Invalid email format')
      }
      return record
    })
  )
})
```
### Plugin Configuration
Most plugins accept configuration options:
```typescript
import { configureSpace } from '@flatfile/plugin-space-configure'
import { dedupePlugin } from '@flatfile/plugin-dedupe'
client.use(
  configureSpace({
    workbooks: [workbookConfig],
    space: {
      name: "Customer Import",
      namespace: "customers"
    }
  })
)

client.use(
  dedupePlugin({
    sheetSlug: 'customers',
    dedupe: {
      key: 'email',
      keep: 'first',
      custom: {
        operation: 'dedupe-customers',
        label: 'Remove Duplicate Customers'
      }
    }
  })
)
```
## Plugin Development
### Creating Custom Plugins
You can create custom plugins for reusable functionality:
```typescript
import { FlatfilePlugin } from '@flatfile/listener'
export function myCustomPlugin(options: any): FlatfilePlugin {
  return (client) => {
    // Plugin initialization logic
    client.filter({ job: 'sheet:myCustomOperation' }, (configure) => {
      configure.on('job:ready', async (event) => {
        // Plugin functionality
        const { jobId } = event.context
        try {
          await client.jobs.ack(jobId, {
            info: 'Running custom operation...'
          })

          // Your custom logic here
          await performCustomOperation(options)

          await client.jobs.complete(jobId, {
            outcome: { message: 'Custom operation completed successfully' }
          })
        } catch (error) {
          await client.jobs.fail(jobId, {
            outcome: { message: `Operation failed: ${error.message}` }
          })
        }
      })
    })
  }
}

// Usage
client.use(myCustomPlugin({ customOption: 'value' }))
```
### Plugin Best Practices
1. **Single Responsibility**: Each plugin should handle one specific functionality
2. **Configuration**: Accept configuration options for flexibility
3. **Error Handling**: Implement robust error handling and logging
4. **Documentation**: Provide clear documentation and examples
5. **Testing**: Include comprehensive tests for your plugin
## Popular Plugin Combinations
### Basic Data Import
```typescript
// Essential plugins for most data import scenarios
client.use(autocast())
client.use(recordHook('sheet-name', validateRecord))
client.use(dedupePlugin({ sheetSlug: 'sheet-name' }))
```
### Advanced Validation
```typescript
// Comprehensive data validation
client.use(autocast())
client.use(recordHook('customers', validateCustomer))
client.use(validateEmail({ sheetSlug: 'customers', fieldKey: 'email' }))
client.use(validatePhone({ sheetSlug: 'customers', fieldKey: 'phone' }))
client.use(validateDate({ sheetSlug: 'customers', fieldKey: 'birthdate' }))
```
### Schema Conversion
```typescript
// Convert external schemas to Flatfile
client.use(convertJsonSchema({ source: 'api-schema.json' }))
client.use(convertOpenApiSchema({ source: 'swagger.yaml' }))
```
### Data Enrichment
```typescript
// Enrich data with external services
client.use(enrichGeocode({
  sheetSlug: 'addresses',
  addressField: 'full_address'
}))

client.use(enrichSentiment({
  sheetSlug: 'feedback',
  textField: 'comments'
}))
```
## Plugin Repository
All Flatfile plugins are open source and available on GitHub:
* **Main Repository**: [github.com/FlatFilers/flatfile-plugins](https://github.com/FlatFilers/flatfile-plugins)
* **NPM Organization**: [@flatfile](https://www.npmjs.com/org/flatfile)
* **Plugin Marketplace**: [flatfile.com/plugins](https://flatfile.com/plugins/)
## Contributing
We welcome contributions to the Flatfile plugin ecosystem:
1. **Bug Reports**: Report issues on the GitHub repository
2. **Feature Requests**: Suggest new plugins or enhancements
3. **Pull Requests**: Contribute code improvements or new plugins
4. **Documentation**: Help improve plugin documentation
## Support
* **Documentation**: Each plugin includes detailed documentation
* **Community**: Join our [Slack community](https://flatfile.com/join-slack/) for support
* **GitHub Issues**: Report bugs or request features on GitHub
* **Professional Support**: Contact our team for enterprise support
# ISBN Validator Plugin
Source: https://flatfile.com/docs/plugins/isbn
A Flatfile plugin that validates and formats International Standard Book Number (ISBN) data during the import process, supporting both ISBN-10 and ISBN-13 standards with automatic formatting and conversion capabilities.
The ISBN Validator plugin for Flatfile is designed to ensure the quality and consistency of International Standard Book Number (ISBN) data during the import process. It automatically validates fields containing ISBNs against both ISBN-10 and ISBN-13 standards, checking for correct structure and valid check digits.
Key features include the ability to automatically format valid ISBNs with hyphens for better readability and to convert ISBNs between formats (e.g., from ISBN-10 to ISBN-13). The plugin provides clear, field-level error messages for invalid ISBNs and informational messages for successful transformations. It is highly configurable, allowing users to specify which sheets and fields to target, making it a flexible tool for any data onboarding workflow involving book information.
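The check-digit math behind these validations can be sketched directly. This is an illustration only; the plugin itself delegates validation and conversion to the `isbn3` library:

```javascript
// Check-digit validation for ISBN-10 and ISBN-13 (illustrative sketch).
function isValidIsbn(raw) {
  const s = raw.replace(/[-\s]/g, '').toUpperCase();
  if (/^\d{9}[\dX]$/.test(s)) {
    // ISBN-10: weighted sum (weights 10..1, 'X' counts as 10)
    // must be divisible by 11.
    let sum = 0;
    for (let i = 0; i < 10; i++) {
      const digit = s[i] === 'X' ? 10 : Number(s[i]);
      sum += digit * (10 - i);
    }
    return sum % 11 === 0;
  }
  if (/^\d{13}$/.test(s)) {
    // ISBN-13: alternating weights 1, 3; sum must be divisible by 10.
    let sum = 0;
    for (let i = 0; i < 13; i++) {
      sum += Number(s[i]) * (i % 2 === 0 ? 1 : 3);
    }
    return sum % 10 === 0;
  }
  return false; // wrong length or characters
}
```

For example, `isValidIsbn('0306406152')` and `isValidIsbn('978-0-306-40615-7')` both pass, while a single transposed or altered digit fails the checksum.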
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-validate-isbn
```
## Configuration & Parameters
The plugin accepts a configuration object with the following parameters:
### `sheetSlug`
* **Type:** `string`
* **Default:** `'**'`
* **Description:** The slug of the sheet to apply the validation to. The default `'**'` applies it to all sheets in the workspace.
### `isbnFields`
* **Type:** `string[]`
* **Default:** `['isbn']`
* **Description:** An array of field keys that contain ISBN values to be validated.
### `autoFormat`
* **Type:** `boolean`
* **Default:** `true`
* **Description:** If set to true, the plugin will automatically format valid ISBNs with hyphens (e.g., '9780306406157' becomes '978-0-306-40615-7').
### `format`
* **Type:** `string` (optional)
* **Default:** `undefined`
* **Description:** If specified, the plugin will attempt to convert the ISBN to the given format. Possible values are `'isbn13'`, `'isbn13h'` (with hyphens), `'isbn10'`, and `'isbn10h'` (with hyphens).
## Usage Examples
### Basic Usage
```javascript
import { FlatfileListener } from "@flatfile/listener";
import validateISBN from "@flatfile/plugin-validate-isbn";
const listener = new FlatfileListener();
// Use the plugin with default settings
// Validates the 'isbn' field on all sheets and auto-formats it.
listener.use(validateISBN());
```
```typescript
import { FlatfileListener } from "@flatfile/listener";
import validateISBN from "@flatfile/plugin-validate-isbn";
const listener = new FlatfileListener();
// Use the plugin with default settings
// Validates the 'isbn' field on all sheets and auto-formats it.
listener.use(validateISBN());
```
### Custom Configuration
```javascript
import { FlatfileListener } from "@flatfile/listener";
import validateISBN from "@flatfile/plugin-validate-isbn";
const listener = new FlatfileListener();
// Use the plugin with custom configuration
listener.use(
  validateISBN({
    sheetSlug: "books",
    isbnFields: ["primary_isbn", "secondary_isbn"],
    autoFormat: true,
    format: "isbn13h", // Convert all valid ISBNs to hyphenated ISBN-13
  })
);
```
```typescript
import { FlatfileListener } from "@flatfile/listener";
import validateISBN from "@flatfile/plugin-validate-isbn";
const listener = new FlatfileListener();
// Use the plugin with custom configuration
listener.use(
  validateISBN({
    sheetSlug: "books",
    isbnFields: ["primary_isbn", "secondary_isbn"],
    autoFormat: true,
    format: "isbn13h", // Convert all valid ISBNs to hyphenated ISBN-13
  })
);
```
### Standalone Validation Function
```javascript
// Using the exported utility function for standalone validation
import { validateAndFormatISBN } from "@flatfile/plugin-validate-isbn";
const isbnToTest = "0306406152";
// Validate and convert an ISBN-10 to a hyphenated ISBN-13
const result = validateAndFormatISBN(isbnToTest, false, "isbn13h");
if (result.isValid) {
  console.log("Validation successful!");
  console.log("Converted ISBN:", result.convertedISBN); // Output: 978-0-306-40615-7
} else {
  console.error("Validation failed:", result.message);
}
```
```typescript
// Using the exported utility function for standalone validation
import { validateAndFormatISBN } from "@flatfile/plugin-validate-isbn";
const isbnToTest = "0306406152";
// Validate and convert an ISBN-10 to a hyphenated ISBN-13
const result = validateAndFormatISBN(isbnToTest, false, "isbn13h");
if (result.isValid) {
  console.log("Validation successful!");
  console.log("Converted ISBN:", result.convertedISBN); // Output: 978-0-306-40615-7
} else {
  console.error("Validation failed:", result.message);
}
```
## Troubleshooting
Common troubleshooting steps include:
1. **Sheet Configuration:** Verify that the `sheetSlug` in the configuration exactly matches the slug of the target sheet in your Flatfile Space.
2. **Field Mapping:** Ensure the strings in the `isbnFields` array match the field keys in your sheet's data model.
3. **Installation:** Confirm that the plugin is correctly installed (`npm install @flatfile/plugin-validate-isbn`) and imported.
## Notes
### Default Behavior
By default, the plugin runs on all sheets, targets the field with the key 'isbn', and automatically formats any valid ISBNs by adding hyphens. It does not perform any format conversion unless the `format` option is explicitly set.
### Dependencies
This plugin has an external dependency on the `isbn3` library, which handles the core validation and conversion logic. The plugin operates using `recordHook`, which is triggered by the `commit:created` event. This means validation runs after a user submits their data but before the commit is finalized.
### Error Handling
The plugin follows standard Flatfile error handling patterns:
* For validation failures, it uses `record.addError(field, message)` to attach a blocking error to the specific field.
* For successful formatting or conversion, it uses `record.addInfo(field, message)` to provide non-blocking feedback to the user.
* If a configured ISBN field is empty or null for a given record, the plugin skips it without adding an error.
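The pattern in these notes can be sketched with a plain mock record. Both `makeRecord` and `checkIsbnField` below are illustrative stand-ins, not the plugin's internals:

```javascript
// Minimal mock of a Flatfile record, for illustration only.
function makeRecord(values) {
  const messages = [];
  return {
    get: (field) => values[field],
    set: (field, value) => { values[field] = value; },
    addError: (field, message) => messages.push({ type: 'error', field, message }),
    addInfo: (field, message) => messages.push({ type: 'info', field, message }),
    messages,
  };
}

// Sketch of the documented behavior: skip empty fields silently,
// attach a blocking error for invalid ISBNs, and attach non-blocking
// info when a value is reformatted. `validate` is a hypothetical
// validator standing in for the isbn3-backed logic.
function checkIsbnField(record, field, validate) {
  const value = record.get(field);
  if (value === null || value === undefined || value === '') return;
  const result = validate(value);
  if (!result.isValid) {
    record.addError(field, result.message);
  } else if (result.formatted && result.formatted !== value) {
    record.set(field, result.formatted);
    record.addInfo(field, `Formatted ISBN to ${result.formatted}`);
  }
}
```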
# Job Handler Plugin
Source: https://flatfile.com/docs/plugins/job-handler
A Flatfile plugin that streamlines the handling of asynchronous Jobs by managing their lifecycle, progress reporting, and completion status.
The Job Handler plugin is designed to streamline the handling of Flatfile Jobs. A Job is a large unit of work performed asynchronously, such as processing a file or configuring a Space. This plugin listens for the `job:ready` event and executes a custom handler function when a matching job is triggered.
Its main purpose is to provide a structured way to manage the lifecycle of a job. It automatically acknowledges the job upon receipt, provides a `tick` function to report progress back to the user, and handles the final completion or failure status. This is useful for any long-running process initiated within Flatfile, allowing developers to focus on the business logic of the task while the plugin manages the communication with the Flatfile API.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-job-handler
```
## Configuration & Parameters
### job
* **Type:** `string | EventFilter`
* **Required:** Yes
* **Description:** A filter to specify which job the listener should handle. This is typically a string in the format "domain:operation", for example, "space:configure" or "workbook:submit".
### handler
* **Type:** `Function`
* **Required:** Yes
* **Description:** An async callback function that contains the logic to be executed for the job. It receives the `event` object and a `tick` function as arguments. Throwing an error within this function will cause the job to fail. If it completes successfully, it can optionally return a `JobCompleteDetails` object to customize the outcome message.
**Function signature:**
```typescript
(event: FlatfileEvent, tick: TickFunction) => Promise<void | Flatfile.JobCompleteDetails>
```
### opts.debug
* **Type:** `boolean`
* **Default:** `false`
* **Description:** An optional parameter to enable detailed logging for the plugin in the console, which is useful for troubleshooting.
### Default Behavior
By default, the plugin listens for the specified `job:ready` event. When triggered, it immediately acknowledges the job with the Flatfile API, setting its status to "Accepted" and progress to 0%. It then executes the user-provided `handler`. If the handler completes without returning a value, the plugin marks the job as complete with a default message "Job complete". If the handler throws an error, the plugin catches it and marks the job as failed, using the error message as the reason.
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
const listener = new FlatfileListener();
listener.use(
  jobHandler('space:configure', async (event, tick) => {
    // Acknowledge the job has started
    await tick(10, 'Starting configuration...');

    // ... your custom logic to configure the space ...
    console.log('Configuring space:', event.context.spaceId);

    // Update progress
    await tick(50, 'Halfway there!');

    // ... more logic ...

    // Job is finished
    await tick(100, 'Configuration complete.');
  })
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
import type { FlatfileEvent } from '@flatfile/listener';
const listener = new FlatfileListener();
listener.use(
  jobHandler('space:configure', async (event: FlatfileEvent, tick) => {
    // Acknowledge the job has started
    await tick(10, 'Starting configuration...');

    // ... your custom logic to configure the space ...
    console.log('Configuring space:', event.context.spaceId);

    // Update progress
    await tick(50, 'Halfway there!');

    // ... more logic ...

    // Job is finished
    await tick(100, 'Configuration complete.');
  })
);
```
### Configuration with Debug
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
const listener = new FlatfileListener();
// Using the 'opts' parameter to enable debug logging
listener.use(
  jobHandler(
    'workbook:submit',
    async (event) => {
      console.log('Processing submitted workbook...');
      // ... processing logic ...
    },
    { debug: true }
  )
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
import type { FlatfileEvent } from '@flatfile/listener';
const listener = new FlatfileListener();
// Using the 'opts' parameter to enable debug logging
listener.use(
  jobHandler(
    'workbook:submit',
    async (event: FlatfileEvent) => {
      console.log('Processing submitted workbook...');
      // ... processing logic ...
    },
    { debug: true }
  )
);
```
### Advanced Usage with Custom Outcomes
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
const listener = new FlatfileListener();
listener.use(
  jobHandler('file:process', async (event, tick) => {
    try {
      await tick(25, 'Processing file...');
      // ... your processing logic ...

      // Return a custom outcome on success
      return {
        outcome: {
          message: 'File processed successfully. 10 new records created.',
          acknowledge: true,
        },
      };
    } catch (error) {
      console.error('An error occurred:', error);
      // Throwing the error will automatically fail the job
      throw new Error('Failed to process the file.');
    }
  })
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
import type { FlatfileEvent } from '@flatfile/listener';
import type { Flatfile } from '@flatfile/api';
const listener = new FlatfileListener();
listener.use(
  jobHandler('file:process', async (event: FlatfileEvent, tick): Promise<Flatfile.JobCompleteDetails> => {
    try {
      await tick(25, 'Processing file...');
      // ... your processing logic ...

      // Return a custom outcome on success
      return {
        outcome: {
          message: 'File processed successfully. 10 new records created.',
          acknowledge: true,
        },
      };
    } catch (error) {
      console.error('An error occurred:', error);
      // Throwing the error will automatically fail the job
      throw new Error('Failed to process the file.');
    }
  })
);
```
### Error Handling Example
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
const listener = new FlatfileListener();
listener.use(
  jobHandler('data:validate', async (event, tick) => {
    await tick(1, 'Starting validation...');

    const records = await getRecords(event); // Fictional function
    if (records.length === 0) {
      // This will be caught by the plugin and fail the job
      throw new Error('Validation failed: No records found to validate.');
    }

    // ... continue validation ...
  })
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { jobHandler } from '@flatfile/plugin-job-handler';
import type { FlatfileEvent } from '@flatfile/listener';
const listener = new FlatfileListener();
listener.use(
  jobHandler('data:validate', async (event: FlatfileEvent, tick) => {
    await tick(1, 'Starting validation...');

    const records = await getRecords(event); // Fictional function
    if (records.length === 0) {
      // This will be caught by the plugin and fail the job
      throw new Error('Validation failed: No records found to validate.');
    }

    // ... continue validation ...
  })
);
```
## Troubleshooting
The main tool for troubleshooting is the `debug` option. Setting `opts.debug = true` when calling `jobHandler` will enable verbose logging from the plugin, showing when a job is received and when it is completed. Fatal errors are logged using `console.error`, which can be monitored in your server logs.
## Notes
### Requirements and Considerations
This plugin is designed to be used within a Flatfile listener environment, such as a Node.js server running the `@flatfile/listener` package. It relies on the Flatfile API to update job statuses, so proper API keys and permissions are required for the environment where the listener is running.
### Error Handling Patterns
The primary error handling pattern is to wrap the logic inside the `handler` function in a `try...catch` block if you need to perform cleanup actions, or to simply `throw` an error to immediately fail the job. The plugin automatically handles the API call to `api.jobs.fail()` when an unhandled exception is thrown from the `handler`.
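That cleanup-then-propagate flow can be sketched independently of the plugin. This is a generic async pattern, with `setup`, `cleanup`, and `body` as hypothetical stand-ins for your own resource handling and job logic:

```javascript
// Run a job body with guaranteed cleanup. An error thrown from `body`
// still propagates to the caller, where jobHandler would catch it and
// call api.jobs.fail() on your behalf.
async function withCleanup(setup, cleanup, body) {
  const resource = await setup();
  try {
    return await body(resource);
  } finally {
    await cleanup(resource); // runs on success and on throw alike
  }
}
```

Inside a handler you would wrap your logic in `withCleanup(...)` and let any error escape, rather than swallowing it, so the plugin can mark the job as failed.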
### Function Signatures
```typescript
function jobHandler(
  job: string | { job: string },
  handler: (event: FlatfileEvent, tick: TickFunction) => Promise<void | Flatfile.JobCompleteDetails>,
  opts?: { debug?: boolean }
): (listener: FlatfileListener) => void

type TickFunction = (progress: number, info?: string) => Promise<Flatfile.JobResponse>
```
### Return Values
* The `jobHandler` function returns a plugin function to be passed to `listener.use()`
* The `handler` function can return either `void` or a `Flatfile.JobCompleteDetails` object
* If `void` is returned, the job is completed with a default success message
* If a `JobCompleteDetails` object is returned, it is used to set a custom outcome for the completed job
* The `tick` function returns a `Promise` which can be awaited but is not required
# JSON Extractor
Source: https://flatfile.com/docs/plugins/json-extractor
Parse JSON and JSON Lines files uploaded to Flatfile and extract data into Sheets within a Workbook
The JSON Extractor plugin is designed to parse JSON (`.json`) and JSON Lines (`.jsonl`, `.jsonlines`) files uploaded to Flatfile. It automatically detects the file type and extracts the data into one or more Sheets within a Flatfile Workbook.
The primary use case is to allow users to upload structured JSON data and have it seamlessly transformed into a tabular format for review and import. The plugin can handle two main JSON structures:
1. A JSON object where each top-level key is a sheet name and its value is an array of objects (records). This creates a multi-sheet Workbook.
2. A single JSON array of objects. This creates a single-sheet Workbook with the default sheet name "Sheet1".
The plugin also intelligently handles nested JSON objects by flattening them into a single-level structure, using dot notation for headers (e.g., an object `{ "address": { "city": "..." } }` becomes a column named `address.city`). This plugin is intended to be used in a server-side listener.
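The flattening behavior can be sketched as follows. This is a simplified illustration of the dot-notation scheme, not the plugin's actual implementation:

```javascript
// Flatten nested objects into dot-notation keys,
// e.g. { address: { city: 'NYC' } } -> { 'address.city': 'NYC' }
function flatten(obj, prefix = '') {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      // Recurse into plain objects; their keys become path segments.
      Object.assign(out, flatten(value, path));
    } else {
      out[path] = value; // leaf values map directly to columns
    }
  }
  return out;
}
```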
## Installation
Install the JSON Extractor plugin using npm:
```bash
npm install @flatfile/plugin-json-extractor
```
## Configuration & Parameters
The `JSONExtractor` function accepts an optional options object with the following properties:
| Parameter | Type | Default | Description |
| ----------- | --------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------- |
| `chunkSize` | `number` | `10000` | Controls the number of records to process in each batch or "chunk" when inserting data into Flatfile |
| `parallel` | `number` | `1` | Controls how many chunks are processed concurrently. Increasing this can speed up the import of very large files but may use more server resources |
| `debug` | `boolean` | `false` | If set to true, enables more verbose logging for debugging purposes |
### Default Behavior
If no options are provided, the plugin processes files by inserting data in chunks of 10,000 records, with only one chunk being processed at a time. Debug logging is disabled.
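The interaction of `chunkSize` and `parallel` can be sketched generically. This illustrates the batching semantics only and is not the plugin's internal code:

```javascript
// Split records into fixed-size chunks (the chunkSize option).
function chunk(records, size) {
  const chunks = [];
  for (let i = 0; i < records.length; i += size) {
    chunks.push(records.slice(i, i + size));
  }
  return chunks;
}

// Process up to `parallel` chunks concurrently, waiting for each
// batch to finish before starting the next (the parallel option).
async function processInBatches(chunks, parallel, handler) {
  for (let i = 0; i < chunks.length; i += parallel) {
    await Promise.all(chunks.slice(i, i + parallel).map(handler));
  }
}
```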
## Usage Examples
### Basic Usage
This example shows how to add the JSON extractor to a Flatfile listener with its default settings.
```javascript JavaScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
listener.use(JSONExtractor());
```
```typescript TypeScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
listener.use(JSONExtractor());
```
### Configuration Example
This example configures the extractor to use a smaller chunk size and process two chunks in parallel.
```javascript JavaScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
listener.use(
  JSONExtractor({
    chunkSize: 5000,
    parallel: 2,
  })
);
```
```typescript TypeScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
listener.use(
  JSONExtractor({
    chunkSize: 5000,
    parallel: 2,
  })
);
```
### Advanced Usage (Standalone Parser)
This example shows how to use the exported `jsonParser` function directly to parse a JSON file buffer outside of the Flatfile listener context.
```javascript JavaScript
import * as fs from 'fs';
import { jsonParser } from '@flatfile/plugin-json-extractor';
const fileBuffer = fs.readFileSync('path/to/my-data.json');
const workbookData = jsonParser(fileBuffer);
console.log(workbookData);
// Output: { Sheet1: { headers: [...], data: [...] } }
```
```typescript TypeScript
import * as fs from 'fs';
import { jsonParser } from '@flatfile/plugin-json-extractor';
const fileBuffer = fs.readFileSync('path/to/my-data.json');
const workbookData = jsonParser(fileBuffer);
console.log(workbookData);
// Output: { Sheet1: { headers: [...], data: [...] } }
```
## API Reference
### JSONExtractor
This is the main factory function for the plugin. It returns a configured extractor instance that can be registered with a Flatfile listener. The extractor listens for `.json`, `.jsonl`, and `.jsonlines` file uploads, parses them, and creates the corresponding Sheets and Records in Flatfile.
**Parameters:**
* `options` (optional): `PluginOptions`
* `chunkSize` (number): Records per chunk. Default: 10000
* `parallel` (number): Number of chunks to process in parallel. Default: 1
* `debug` (boolean): Enable verbose logging. Default: false
**Return Value:**
Returns a Flatfile `Extractor` instance, which is a type of listener middleware.
**Usage Example:**
```javascript JavaScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
// Register the plugin with default options
listener.use(JSONExtractor());
// Or with custom options
listener.use(JSONExtractor({ chunkSize: 1000, parallel: 5 }));
```
```typescript TypeScript
import { listener } from '@flatfile/listener';
import { JSONExtractor } from '@flatfile/plugin-json-extractor';
// Register the plugin with default options
listener.use(JSONExtractor());
// Or with custom options
listener.use(JSONExtractor({ chunkSize: 1000, parallel: 5 }));
```
**Error Handling:**
The extractor is built on the `@flatfile/util-extractor` utility, which automatically handles job lifecycle events. If the parser throws an error (e.g., due to malformed JSON), the utility will catch it and fail the corresponding job in Flatfile, providing feedback to the user in the UI.
### jsonParser
A standalone function that performs the core logic of parsing a file buffer into a `WorkbookCapture` object. This is useful for testing or for use cases where parsing is needed without the full listener integration. It handles both standard JSON and JSONL formats.
**Parameters:**
* `buffer`: `Buffer` - A Node.js Buffer containing the raw file content
* `options` (optional): `{ readonly fileExt?: string }`
* `fileExt` (string): Can be set to 'jsonl' to explicitly trigger JSON Lines parsing logic, even if the file name is not available
**Return Value:**
Returns a `WorkbookCapture` object, which has sheet names as keys and `SheetCapture` objects as values.
Example: `{ "MySheet": { headers: ["id", "name"], data: [{ id: {value: 1}, name: {value: "Test"} }] } }`
**Usage Example:**
```javascript JavaScript
import * as fs from 'fs';
import { jsonParser } from '@flatfile/plugin-json-extractor';
// For a standard JSON array file
const jsonBuffer = Buffer.from('[{"id": 1, "name": "Alice"}]');
const workbook = jsonParser(jsonBuffer);
// workbook -> { Sheet1: { headers: ['id', 'name'], data: [...] } }
// For a JSONL file
const jsonlBuffer = Buffer.from('{"id": 1}\n{"id": 2}');
const workbookL = jsonParser(jsonlBuffer, { fileExt: 'jsonl' });
// workbookL -> { Sheet1: { headers: ['id'], data: [...] } }
```
```typescript TypeScript
import * as fs from 'fs';
import { jsonParser } from '@flatfile/plugin-json-extractor';
// For a standard JSON array file
const jsonBuffer = Buffer.from('[{"id": 1, "name": "Alice"}]');
const workbook = jsonParser(jsonBuffer);
// workbook -> { Sheet1: { headers: ['id', 'name'], data: [...] } }
// For a JSONL file
const jsonlBuffer = Buffer.from('{"id": 1}\n{"id": 2}');
const workbookL = jsonParser(jsonlBuffer, { fileExt: 'jsonl' });
// workbookL -> { Sheet1: { headers: ['id'], data: [...] } }
```
**Error Handling:**
* If the buffer contains fundamentally invalid JSON, the function will throw a `SyntaxError`
* For JSONL files, it will skip any individual lines that are not valid JSON, log an error to the console for each invalid line, and continue processing the rest of the file
* If the input data is not an object or array (e.g., a simple string or number), it will log an error and return an empty object
## Troubleshooting
* **File not being processed**: Ensure it has a `.json` or `.jsonl` extension and that the listener is correctly configured with `listener.use(JSONExtractor())`
* **Data appears in wrong sheet or not at all**: Check the root structure of your JSON file. It must be either an array of objects or an object of arrays
* **Some rows from JSONL file are missing**: Check the server logs for "Invalid JSON line" errors to identify and correct malformed lines in the source file
## Notes
### Special Considerations
* **Deployment**: This plugin is designed to run in a server-side environment as part of a Flatfile listener
* **Supported File Types**: The plugin automatically triggers for files with extensions `.json`, `.jsonl`, and `.jsonlines`
* **Data Structure for Multi-Sheet Extraction**: To create multiple sheets from a single file, the root of the JSON must be an object where each key is a string (the sheet name) and each value is an array of uniform objects
* **Data Structure for Single-Sheet Extraction**: If the root of the JSON is an array of objects, the plugin will create a single sheet named "Sheet1"
* **Nested Objects**: Nested objects are flattened into columns using dot notation. For example, `{ "user": { "name": "John" } }` will result in a column named `user.name`
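The dot-notation flattening described above can be pictured with a small sketch. This is a hypothetical helper for illustration, not the plugin's actual code:

```javascript
// Minimal sketch of dot-notation flattening, similar in spirit to how
// the extractor turns nested objects into columns. Illustrative only.
function flattenRecord(obj, prefix = '') {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const path = prefix ? `${prefix}.${key}` : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(out, flattenRecord(value, path));
    } else {
      out[path] = value;
    }
  }
  return out;
}

console.log(flattenRecord({ user: { name: 'John' }, id: 1 }));
// { 'user.name': 'John', id: 1 }
```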
### Error Handling Patterns
* The core `parseBuffer` function is wrapped in a try/catch block. On a fatal parsing error (like malformed JSON), it re-throws the error, which is then handled by the extractor utility to fail the job
* For JSONL files, the parser processes the file line-by-line. If a line contains invalid JSON, it is skipped, an error is logged to the console, and processing continues with the next valid line. This makes it resilient to partially corrupted JSONL files
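The tolerant line-by-line pattern can be sketched as follows (illustrative only; the plugin's internals may differ):

```javascript
// Parse JSONL text, skipping invalid lines instead of failing the file.
function parseJsonl(text) {
  const rows = [];
  for (const line of text.split('\n')) {
    if (!line.trim()) continue; // ignore blank lines
    try {
      rows.push(JSON.parse(line));
    } catch (err) {
      // Log and skip the bad line, then keep processing
      console.error(`Invalid JSON line: ${line}`);
    }
  }
  return rows;
}

console.log(parseJsonl('{"id": 1}\nnot json\n{"id": 2}'));
// [ { id: 1 }, { id: 2 } ]
```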
# JSON Schema Converter
Source: https://flatfile.com/docs/plugins/json-schema
Automatically transform JSON Schema into Flatfile Blueprint for streamlined data model setup
The JSON Schema Converter plugin for Flatfile automatically transforms a JSON Schema into a Flatfile Blueprint. A Blueprint is Flatfile's Data Definition Language (DDL) used for defining data models, validations, and transformations.
The primary purpose of this plugin is to streamline the setup of a Flatfile Space. Instead of manually defining each field for your Sheets, you can provide a JSON Schema, and the plugin will generate the corresponding Workbooks and Sheets configuration. This is particularly useful for projects that already use JSON Schema for data validation or API definitions, allowing for a single source of truth for data structures.
The plugin is designed to be used in a server-side Flatfile listener, typically on the `space:configure` event. It can fetch schemas from a URL, use a direct JSON object, or execute a function to retrieve the schema dynamically.
## Installation
```bash
npm install @flatfile/plugin-convert-json-schema
```
## Configuration & Parameters
The plugin is configured via a single object passed to the `configureSpaceWithJsonSchema` function. This object is of type `JsonSetupFactory`.
### JsonSetupFactory
| Parameter | Type | Required | Description |
| ----------- | -------------------------------------- | -------- | ------------------------------------------------------------------- |
| `workbooks` | `PartialWorkbookConfig[]` | Yes | An array of workbook configurations to be created in the Space |
| `space` | `Partial<Flatfile.SpaceConfig>` | No | Configuration details for the Space itself, like metadata or themes |
### PartialWorkbookConfig
| Parameter | Type | Required | Description |
| --------- | ---------------------- | -------- | ---------------------------------------------------- |
| `name` | `string` | Yes | The name of the Workbook |
| `sheets` | `PartialSheetConfig[]` | Yes | An array of sheet configurations within the Workbook |
| `actions` | `Flatfile.Action[]` | No | Actions available at the Workbook level |
### PartialSheetConfig
| Parameter | Type | Required | Description |
| --------- | ------------------------------------------------------- | -------- | ------------------------------------------------------------------------------------------------------------------------------------------------------------ |
| `source` | `object \| string \| (() => object \| Promise<object>)` | Yes | The JSON Schema for the sheet. Can be a direct JSON Schema object, a string URL pointing to a schema file, or an async function that returns a schema object |
| `name` | `string` | No | The name of the Sheet. Defaults to the `title` property from the root of the JSON Schema if not provided |
| `slug` | `string` | No | A unique identifier for the sheet |
| `actions` | `Flatfile.Action[]` | No | Actions available at the Sheet level |
### Default Behavior
By default, the plugin will create Workbooks and Sheets as defined in the `workbooks` array. For each sheet, it parses the `source` JSON schema, converting its `properties` into Flatfile fields. Nested objects in the schema are flattened into fields with names like `parentKey_childKey`. The sheet's name defaults to the schema's `title` if not explicitly provided.
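The conversion of schema `properties` into field keys can be sketched roughly like this. It is a simplified, hypothetical helper; the plugin's real conversion also handles enums, `$ref`s, required fields, and type mapping:

```javascript
// Walk a JSON Schema's properties and produce flattened field keys,
// joining nested object paths with an underscore. Illustrative only.
function schemaToFieldKeys(schema, prefix = '') {
  const keys = [];
  for (const [name, prop] of Object.entries(schema.properties ?? {})) {
    const key = prefix ? `${prefix}_${name}` : name;
    if (prop.type === 'object' && prop.properties) {
      keys.push(...schemaToFieldKeys(prop, key));
    } else {
      keys.push(key);
    }
  }
  return keys;
}

const schema = {
  title: 'Person',
  type: 'object',
  properties: {
    name: { type: 'string' },
    address: { type: 'object', properties: { street: { type: 'string' } } },
  },
};
console.log(schemaToFieldKeys(schema)); // [ 'name', 'address_street' ]
```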
## Usage Examples
### Basic Usage with URL Schema
```javascript JavaScript
import { configureSpaceWithJsonSchema } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
configureSpaceWithJsonSchema({
workbooks: [
{
name: "JSON Schema Workbook",
sheets: [
{
name: "Person Sheet",
source: "https://example.com/schemas/person.json",
},
],
},
],
})
);
}
```
```typescript TypeScript
import { configureSpaceWithJsonSchema } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.use(
configureSpaceWithJsonSchema({
workbooks: [
{
name: "JSON Schema Workbook",
sheets: [
{
name: "Person Sheet",
source: "https://example.com/schemas/person.json",
},
],
},
],
})
);
}
```
### Local Schema with Callback
```javascript JavaScript
import { configureSpaceWithJsonSchema } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener } from "@flatfile/listener";
import api from "@flatfile/api";
export default function (listener) {
const personSchema = {
title: "Person",
type: "object",
properties: {
firstName: { type: "string" },
lastName: { type: "string" },
age: { type: "integer", minimum: 0 },
},
};
const callback = async (event, workbookIds, tick) => {
const { spaceId } = event.context;
await api.documents.create(spaceId, {
title: "Welcome",
body: "Welcome to your new Space! ",
});
await tick(100, "Space setup complete");
};
listener.use(
configureSpaceWithJsonSchema(
{
workbooks: [
{
name: "My Workbook",
sheets: [{ source: personSchema }],
},
],
},
callback
)
);
}
```
```typescript TypeScript
import { configureSpaceWithJsonSchema } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener, FlatfileEvent } from "@flatfile/listener";
import { TickFunction } from "@flatfile/plugin-job-handler";
import api from "@flatfile/api";
export default function (listener: FlatfileListener) {
const personSchema = {
title: "Person",
type: "object",
properties: {
firstName: { type: "string" },
lastName: { type: "string" },
age: { type: "integer", minimum: 0 },
},
};
const callback = async (event: FlatfileEvent, workbookIds: string[], tick: TickFunction) => {
const { spaceId } = event.context;
await api.documents.create(spaceId, {
title: "Welcome",
body: "Welcome to your new Space! ",
});
await tick(100, "Space setup complete");
};
listener.use(
configureSpaceWithJsonSchema(
{
workbooks: [
{
name: "My Workbook",
sheets: [{ source: personSchema }],
},
],
},
callback
)
);
}
```
### Dynamic Schema from Function
```javascript JavaScript
import { configureSpaceWithJsonSchema, fetchExternalReference } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
configureSpaceWithJsonSchema({
workbooks: [
{
name: "Dynamic Workbook",
sheets: [
{
name: "Product Sheet",
source: async () => {
// Imagine custom logic here, e.g., adding auth headers
const schemaUrl = "https://api.my-service.com/schemas/product.json";
return await fetchExternalReference(schemaUrl);
},
},
],
},
],
})
);
}
```
```typescript TypeScript
import { configureSpaceWithJsonSchema, fetchExternalReference } from "@flatfile/plugin-convert-json-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.use(
configureSpaceWithJsonSchema({
workbooks: [
{
name: "Dynamic Workbook",
sheets: [
{
name: "Product Sheet",
source: async () => {
// Imagine custom logic here, e.g., adding auth headers
const schemaUrl = "https://api.my-service.com/schemas/product.json";
return await fetchExternalReference(schemaUrl);
},
},
],
},
],
})
);
}
```
### Using fetchExternalReference with Error Handling
```javascript JavaScript
import { fetchExternalReference } from "@flatfile/plugin-convert-json-schema";
try {
const schema = await fetchExternalReference("https://example.com/schemas/product.json");
// Use schema
} catch (error) {
console.error("Failed to fetch schema:", error.message);
// Handle the error, perhaps by falling back to a default schema
}
```
```typescript TypeScript
import { fetchExternalReference } from "@flatfile/plugin-convert-json-schema";
try {
const schema = await fetchExternalReference("https://example.com/schemas/product.json");
// Use schema
} catch (error) {
console.error("Failed to fetch schema:", error.message);
// Handle the error, perhaps by falling back to a default schema
}
```
## Troubleshooting
### Job Fails on `space:configure`
Check the Job status in the Flatfile Dashboard for error messages. Common causes include:
* An invalid URL was provided for a schema `source`
* The schema at the URL is not valid JSON
* A `$ref` in the schema points to a location that cannot be resolved
### Fields Not Appearing as Expected
* Ensure the JSON schema has a `properties` object at the level you expect fields to be generated from
* Nested objects are flattened with an underscore (`_`) separator. For example, an `address` object with a `street` property becomes a field with the key `address_street`
* Check the data types in your schema. Supported types include `string`, `number`, `integer`, `boolean`, and `array` (which becomes `string-list`). `enum` is also supported. Unrecognized types will be ignored
## Notes
### Special Considerations
* This plugin is intended for use in server-side listeners, specifically for the `space:configure` event
* The plugin handles JSON schema references (`$ref`). It can resolve local references within the same schema (e.g., `#/definitions/address`) and external references to other files (e.g., `common.json#/definitions/name`). External references are resolved relative to the `$id` property of the schema they are in
* The plugin makes API calls on behalf of the user, including `api.spaces.update` and `api.workbooks.create`
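Local reference resolution amounts to walking the JSON Pointer segments of the `$ref` within the same document. A minimal sketch (which ignores JSON Pointer escape sequences like `~0`/`~1`; external refs require fetching, e.g. via `fetchExternalReference`):

```javascript
// Resolve a local reference like "#/definitions/address" by walking
// the pointer path inside the schema document. Illustrative only.
function resolveLocalRef(schema, ref) {
  if (!ref.startsWith('#/')) throw new Error(`Not a local ref: ${ref}`);
  return ref
    .slice(2)
    .split('/')
    .reduce((node, segment) => node?.[segment], schema);
}

const schema = {
  definitions: { address: { type: 'object' } },
  properties: { home: { $ref: '#/definitions/address' } },
};
console.log(resolveLocalRef(schema, '#/definitions/address'));
// { type: 'object' }
```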
### Error Handling
Errors during schema fetching (`fetchExternalReference`) or reference resolution will bubble up. When used within a Flatfile listener, these unhandled exceptions will cause the associated Job to fail, which is the standard error handling pattern in the Flatfile ecosystem. The Job's execution history will contain details about the error.
It is the user's responsibility to ensure that the provided JSON schemas are valid and that any URLs are accessible from the server environment where the listener is running.
# Markdown File Extractor
Source: https://flatfile.com/docs/plugins/markdown
Automatically parse Markdown files and extract tables into Flatfile Sheets
The Markdown Extractor plugin is designed to automatically parse Markdown files (`.md`) uploaded to Flatfile. Its primary purpose is to find and extract any tables present in the markdown content. For each table it discovers, the plugin creates a new Sheet within the Flatfile Space. The table's header row is used to define the fields (columns) of the Sheet, and the subsequent rows are converted into records. This is useful for importing data that is documented or stored in Markdown files, such as in project readmes, wikis, or technical documentation.
## Installation
Install the Markdown Extractor plugin using npm:
```bash
npm install @flatfile/plugin-markdown-extractor
```
## Configuration & Parameters
The plugin accepts the following configuration options:
### maxTables
* **Type:** `number`
* **Default:** `Infinity`
* **Description:** Sets the maximum number of tables to extract from a single Markdown file. By default, it will extract all tables it finds.
### errorHandling
* **Type:** `'strict' | 'lenient'`
* **Default:** `'lenient'`
* **Description:** Controls how parsing errors are handled:
* `'strict'`: The plugin will throw an error and stop processing if it encounters a malformed table (e.g., a data row with a different number of columns than the header)
* `'lenient'`: The plugin will log a warning to the console, skip the problematic table, and continue processing the rest of the file
### debug
* **Type:** `boolean`
* **Default:** `false`
* **Description:** When set to true, enables verbose logging to the console. This is useful for troubleshooting issues with file parsing, as it shows the content being parsed, tables found, and the final normalized data.
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (listener) {
  listener.use(MarkdownExtractor());
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (listener: FlatfileListener) {
  listener.use(MarkdownExtractor());
}
```
### Configuration Example
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (listener) {
const options = {
maxTables: 5,
errorHandling: 'strict',
debug: true
};
listener.use(MarkdownExtractor(options));
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (listener: FlatfileListener) {
const options = {
maxTables: 5,
errorHandling: 'strict' as const,
debug: true
};
listener.use(MarkdownExtractor(options));
}
```
### Direct Parser Usage
```javascript JavaScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
const markdownContent = '| ID | Name |\n|----|------|\n| 1 | Test |';
const buffer = Buffer.from(markdownContent, 'utf-8');
const workbookData = markdownParser(buffer, { errorHandling: 'lenient' });
console.log(JSON.stringify(workbookData, null, 2));
```
```typescript TypeScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
const markdownContent = '| ID | Name |\n|----|------|\n| 1 | Test |';
const buffer = Buffer.from(markdownContent, 'utf-8');
const workbookData = markdownParser(buffer, { errorHandling: 'lenient' });
console.log(JSON.stringify(workbookData, null, 2));
```
## API Reference
### MarkdownExtractor(options?)
The main factory function used to create and configure the markdown extractor plugin for use with a Flatfile listener.
**Parameters:**
* `options` (optional): Configuration settings object
**Returns:** An `Extractor` instance that can be passed to `listener.use()`
**Example:**
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (myListener) {
// Use with default options
myListener.use(MarkdownExtractor());
// Use with custom options
myListener.use(MarkdownExtractor({ maxTables: 1, errorHandling: 'strict' }));
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { MarkdownExtractor } from '@flatfile/plugin-markdown-extractor';
export default function (myListener: FlatfileListener) {
// Use with default options
myListener.use(MarkdownExtractor());
// Use with custom options
myListener.use(MarkdownExtractor({ maxTables: 1, errorHandling: 'strict' }));
}
```
### markdownParser(buffer, options)
A low-level function that directly parses a Buffer containing markdown content into a Flatfile `WorkbookCapture` object.
**Parameters:**
* `buffer`: A Node.js Buffer containing the UTF-8 encoded content of a markdown file
* `options`: Configuration settings object
**Returns:** A `WorkbookCapture` object, which is a map of sheet names to sheet data
**Example:**
```javascript JavaScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
const markdownContent = '| ID | Name |\n|----|------|\n| 1 | Test |';
const buffer = Buffer.from(markdownContent, 'utf-8');
const workbookData = markdownParser(buffer, { errorHandling: 'lenient' });
console.log(JSON.stringify(workbookData, null, 2));
```
```typescript TypeScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
const markdownContent = '| ID | Name |\n|----|------|\n| 1 | Test |';
const buffer = Buffer.from(markdownContent, 'utf-8');
const workbookData = markdownParser(buffer, { errorHandling: 'lenient' });
console.log(JSON.stringify(workbookData, null, 2));
```
## Troubleshooting
### Error Handling Example
This example demonstrates how the `errorHandling` option affects behavior when parsing a malformed table:
```javascript JavaScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
// Table has a header with 2 columns but a data row with 3
const malformedContent = '| A | B |\n|---|---|\n| 1 | 2 | 3 |';
const buffer = Buffer.from(malformedContent, 'utf-8');
// 'strict' mode will throw an error
try {
markdownParser(buffer, { errorHandling: 'strict' });
} catch (e) {
console.error('Caught error in strict mode:', e.message);
//-> Caught error in strict mode: Data row length does not match header row length
}
// 'lenient' mode will not throw but will log a warning and skip the table
const result = markdownParser(buffer, { errorHandling: 'lenient' });
console.log('Result in lenient mode:', result);
// (A warning is logged to the console)
//-> Result in lenient mode: {}
```
```typescript TypeScript
import { markdownParser } from '@flatfile/plugin-markdown-extractor';
// Table has a header with 2 columns but a data row with 3
const malformedContent = '| A | B |\n|---|---|\n| 1 | 2 | 3 |';
const buffer = Buffer.from(malformedContent, 'utf-8');
// 'strict' mode will throw an error
try {
markdownParser(buffer, { errorHandling: 'strict' });
} catch (e: any) {
console.error('Caught error in strict mode:', e.message);
//-> Caught error in strict mode: Data row length does not match header row length
}
// 'lenient' mode will not throw but will log a warning and skip the table
const result = markdownParser(buffer, { errorHandling: 'lenient' });
console.log('Result in lenient mode:', result);
// (A warning is logged to the console)
//-> Result in lenient mode: {}
```
### Debug Mode
To diagnose parsing issues, enable debug mode:
```javascript JavaScript
listener.use(MarkdownExtractor({ debug: true }));
```
```typescript TypeScript
listener.use(MarkdownExtractor({ debug: true }));
```
## Notes
### Default Behavior
* The plugin extracts all tables found in a markdown file by default (`maxTables: Infinity`)
* Uses lenient error handling by default, skipping malformed tables and continuing processing
* Debug logging is disabled by default
### Special Considerations
* This plugin is intended for use in a server-side Flatfile listener
* It operates on files with the `.md` extension
* For each table found in a markdown file, a new Sheet is created with auto-generated names (`Table_1`, `Table_2`, etc.)
* The parser expects standard GitHub-flavored markdown table syntax
* Highly complex or non-standard tables may not be parsed correctly
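The expected GFM table shape (header row, separator row, data rows) can be sketched with a minimal splitter. This is a hypothetical helper for illustration; the plugin's parser is more robust:

```javascript
// Split a GitHub-flavored markdown table into headers and rows.
// Assumes a well-formed table: header, |---| separator, then data.
function parseGfmTable(md) {
  const lines = md.trim().split('\n');
  const cells = (line) =>
    line.replace(/^\|/, '').replace(/\|$/, '').split('|').map((c) => c.trim());
  const headers = cells(lines[0]);
  // lines[1] is the |---|---| separator row; data starts at index 2
  const rows = lines.slice(2).map(cells);
  return { headers, rows };
}

const table = '| ID | Name |\n|----|------|\n| 1 | Test |';
console.log(parseGfmTable(table));
// { headers: [ 'ID', 'Name' ], rows: [ [ '1', 'Test' ] ] }
```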
### Error Handling Patterns
* **Lenient mode (default)**: Logs warnings for malformed tables and continues processing
* **Strict mode**: Throws errors immediately when encountering malformed tables, failing the entire file processing
### Troubleshooting Tips
* Set `debug: true` to enable detailed logging for diagnosing parsing issues
* Check console output for warnings about skipped tables in lenient mode
* Ensure markdown tables follow standard GitHub-flavored markdown syntax
* Verify that header and data rows have matching column counts
# Merge.dev Connection Plugin
Source: https://flatfile.com/docs/plugins/merge-connection
Connect Flatfile to the Merge.dev unified API platform to sync data from hundreds of third-party integrations including HRIS, ATS, CRM, and Accounting systems.
The Merge.dev Connection Plugin enables users to sync data from hundreds of third-party integrations directly into Flatfile through the Merge.dev unified API platform. When a user establishes a connection to a service via Merge within the Flatfile UI, the plugin automatically creates a new Flatfile Workbook with a schema that matches the data models for that integration. The plugin performs an initial data sync and allows users to manually trigger full data refreshes at any time.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-connect-via-merge
```
## Configuration & Parameters
This plugin is configured through Flatfile Secrets rather than code-based options.
### Required Secret
**`MERGE_ACCESS_KEY`**: Your API key from your Merge.dev account. This secret must be created in your Flatfile Space and is used to authenticate with the Merge API for token exchange, schema fetching, and data synchronization.
**Default Behavior:** If the `MERGE_ACCESS_KEY` secret is not present or invalid, the plugin will fail during workbook creation or data sync, resulting in a failed Flatfile Job with an error message indicating the missing key.
### Function Signature
The plugin exports a single function:
```typescript
mergePlugin(): (listener: FlatfileListener) => void
```
**Parameters:** None
**Returns:** A function that registers job handlers with the Flatfile listener for:
* `space:createConnectedWorkbook`: Creates workbooks based on Merge schemas
* `workbook:syncConnectedWorkbook`: Syncs data from connected services
## Usage Examples
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener) {
// Add the Merge.dev plugin to the listener
listener.use(mergePlugin());
}
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener: FlatfileListener) {
// Add the Merge.dev plugin to the listener
listener.use(mergePlugin());
}
```
### Complete Setup Example
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener) {
// Register the Merge.dev plugin handlers
listener.use(mergePlugin());
// Add other listeners as needed
listener.on('**', (event) => {
console.log(`Event received: ${event.topic}`);
});
}
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener: FlatfileListener) {
// Register the Merge.dev plugin handlers
listener.use(mergePlugin());
// Add other listeners as needed
listener.on('**', (event) => {
console.log(`Event received: ${event.topic}`);
});
}
```
### Configuration Setup
1. **Create the Secret:** In your Flatfile Space settings, create a new Secret with the name `MERGE_ACCESS_KEY` and your Merge.dev API key as the value.
2. **Use the Plugin:** The plugin automatically looks for the `MERGE_ACCESS_KEY` secret in the Space where it's running.
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener) {
// The plugin automatically uses the 'MERGE_ACCESS_KEY' secret
listener.use(mergePlugin());
}
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { mergePlugin } from "@flatfile/plugin-connect-via-merge";
export default function (listener: FlatfileListener) {
// The plugin automatically uses the 'MERGE_ACCESS_KEY' secret
listener.use(mergePlugin());
}
```
## Troubleshooting
### Missing API Key Error
If you encounter errors related to missing Merge API keys, ensure that:
1. The `MERGE_ACCESS_KEY` secret is properly set in your Flatfile Space
2. The API key value is correct and active in your Merge.dev account
### Sync Timeout Issues
If sync operations timeout, the plugin polls Merge.dev for up to 5 minutes (30 attempts at 10-second intervals). If Merge doesn't complete its sync within this window, the job will fail with a timeout error.
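The bounded-polling behavior described above (a fixed number of attempts at a fixed interval, then a timeout error) can be sketched generically. `pollUntilDone` is a hypothetical helper, not the plugin's actual implementation:

```javascript
// Poll a status check until it reports done, up to a maximum number of
// attempts, waiting a fixed interval between checks. Illustrative only.
async function pollUntilDone(checkStatus, { attempts = 30, intervalMs = 10_000 } = {}) {
  for (let i = 0; i < attempts; i++) {
    if (await checkStatus()) return true;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
  throw new Error(`Sync did not complete after ${attempts} attempts`);
}

// Example: resolves once a fake status flips to done on the third check
let calls = 0;
pollUntilDone(async () => ++calls >= 3, { attempts: 5, intervalMs: 10 })
  .then(() => console.log(`Done after ${calls} checks`));
// Done after 3 checks
```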
### Job Failures
Check the Flatfile Job logs in your dashboard for specific error messages. The plugin provides descriptive error messages for common issues like missing credentials or API failures.
## Notes
### Prerequisites
* The `connections` feature flag must be enabled for your Flatfile account (contact [support@flatfile.com](mailto:support@flatfile.com))
* Active Merge.dev account required
* `MERGE_ACCESS_KEY` secret must be configured in your Flatfile Space
### Alpha Release Warning
This plugin is an alpha release. Functionality and APIs may change in future versions.
### Data Sync Behavior
* Each sync performs a **full refresh** of data
* All existing records are deleted before inserting current data
* Manual changes made in Flatfile will be overwritten on next sync
* This ensures data consistency with the source system
### Automatic Secret Management
The plugin automatically creates workbook-specific secrets (per-workbook token names ending in `:MERGE_X_ACCOUNT_TOKEN`) to store the Merge account token for each connection. These are managed internally by the plugin.
### Error Handling Pattern
The plugin uses centralized error handling that:
* Logs original errors to the console for debugging
* Throws user-friendly error messages displayed in the Flatfile UI
* Causes failed jobs to show descriptive error messages in the dashboard
### Manual Sync Actions
After initial setup, the plugin automatically adds a "Sync" action to connected workbooks. Users can trigger this action from the Flatfile UI to refresh data without additional code.
# Number Validation Plugin
Source: https://flatfile.com/docs/plugins/number
Comprehensive validation for numeric data during the Flatfile import process with support for ranges, formats, and special number types.
The Number Validation Plugin for Flatfile provides comprehensive validation for numeric data during the import process. It attaches to the `commit:created` listener event to analyze and validate data from specified fields.
Its main purpose is to ensure that numeric data conforms to a wide range of rules before being accepted into the system. Use cases include:
* Enforcing value ranges (e.g., age must be between 18 and 65)
* Validating data formats like integers, currency, or numbers with specific precision and scale
* Ensuring numbers follow specific rules, such as being a multiple of a certain value (step validation)
* Checking for special numeric properties like being even, odd, or prime
* Automatically cleaning up number formats by handling thousands separators and decimal points
* Optionally rounding or truncating numbers before validation
The plugin modifies records by setting the cleaned, parsed numeric value and attaching any validation errors or warnings directly to the relevant field.
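The separator cleanup mentioned above can be sketched with a small helper that strips the configured thousands separator and normalizes the decimal point before parsing. `normalizeNumber` is a hypothetical helper, not the plugin's actual code:

```javascript
// Strip the thousands separator, normalize the decimal point, and parse.
// Returns null when the cleaned string is not a valid number.
function normalizeNumber(input, { thousandsSeparator = ',', decimalPoint = '.' } = {}) {
  const cleaned = String(input)
    .split(thousandsSeparator).join('')  // drop all thousands separators
    .replace(decimalPoint, '.');         // normalize the decimal point
  const value = Number(cleaned);
  return Number.isNaN(value) ? null : value;
}

console.log(normalizeNumber('1,234.56')); // 1234.56
console.log(normalizeNumber('1.234,56', { thousandsSeparator: '.', decimalPoint: ',' })); // 1234.56
```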
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-validate-number
```
## Configuration & Parameters
The `validateNumber` function accepts a single configuration object with the following properties:
### Required Parameters
| Parameter | Type | Description |
| --------- | ---------- | ------------------------------------------------------------------------------------- |
| `fields` | `string[]` | An array of field keys (column names) to which the validation rules should be applied |
### Optional Parameters
| Parameter | Type | Default | Description |
| -------------------- | ---------- | --------- | ---------------------------------------------------------------------------------------------------- |
| `sheetSlug` | `string` | `'**'` | The slug of the sheet to apply the validation to. Defaults to all sheets |
| `min` | `number` | undefined | The minimum allowed value for the number |
| `max` | `number` | undefined | The maximum allowed value for the number |
| `inclusive` | `boolean` | `false` | Determines if the `min` and `max` values are inclusive |
| `integerOnly` | `boolean` | `false` | If true, the value must be an integer (no decimal part) |
| `precision` | `number` | undefined | The total maximum number of digits allowed (requires `scale` to be set) |
| `scale` | `number` | undefined | The maximum number of digits allowed after the decimal point (requires `precision` to be set) |
| `currency` | `boolean` | `false` | If true, validates that the number is a valid currency format, allowing up to two decimal places |
| `step` | `number` | undefined | The value must be a multiple of this number |
| `thousandsSeparator` | `string` | `','` | The character used as a thousands separator in the input string |
| `decimalPoint` | `string` | `'.'` | The character used as the decimal point in the input string |
| `specialTypes` | `string[]` | undefined | An array of special number types to validate against. Supported values: `'prime'`, `'even'`, `'odd'` |
| `round` | `boolean` | `false` | If true, the number is rounded to the nearest integer before validation |
| `truncate` | `boolean` | `false` | If true, the decimal part of the number is removed before validation |
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { validateNumber } from '@flatfile/plugin-validate-number';
const listener = new FlatfileListener();
listener.use(
validateNumber({
fields: ['quantity'],
integerOnly: true,
})
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { validateNumber } from '@flatfile/plugin-validate-number';
const listener = new FlatfileListener();
listener.use(
validateNumber({
fields: ['quantity'],
integerOnly: true,
})
);
```
### Currency Validation
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { validateNumber } from '@flatfile/plugin-validate-number';
const listener = new FlatfileListener();
listener.use(
validateNumber({
sheetSlug: 'products',
fields: ['price'],
min: 0,
max: 1000000,
inclusive: true,
currency: true,
precision: 9, // 7 digits before decimal, 2 after
scale: 2,
thousandsSeparator: ',',
decimalPoint: '.',
})
);
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { validateNumber } from '@flatfile/plugin-validate-number';
const listener = new FlatfileListener();
listener.use(
validateNumber({
sheetSlug: 'products',
fields: ['price'],
min: 0,
max: 1000000,
inclusive: true,
currency: true,
precision: 9, // 7 digits before decimal, 2 after
scale: 2,
thousandsSeparator: ',',
decimalPoint: '.',
})
);
```
### Direct Validation Function
```javascript JavaScript
import { validateNumberField } from '@flatfile/plugin-validate-number';
const customValidation = () => {
const inputValue = '1,234.56';
const config = { thousandsSeparator: ',', decimalPoint: '.' };
const result = validateNumberField(inputValue, config);
if (result.errors.length > 0) {
console.error('Validation Errors:', result.errors);
} else if (result.warnings.length > 0) {
console.warn('Validation Warnings:', result.warnings);
} else {
console.log('Validated Value:', result.value); // Outputs: 1234.56
}
};
```
```typescript TypeScript
import { validateNumberField } from '@flatfile/plugin-validate-number';
const customValidation = (): void => {
const inputValue = '1,234.56';
const config = { thousandsSeparator: ',', decimalPoint: '.' };
const result = validateNumberField(inputValue, config);
if (result.errors.length > 0) {
console.error('Validation Errors:', result.errors);
} else if (result.warnings.length > 0) {
console.warn('Validation Warnings:', result.warnings);
} else {
console.log('Validated Value:', result.value); // Outputs: 1234.56
}
};
```
## API Reference
### validateNumber
The main plugin entry point that returns a function for use with `listener.use()`.
**Parameters:**
* `config` (object): Configuration object containing validation rules and target fields
**Returns:**
A function of type `(listener: FlatfileListener) => void` that registers the validation hook.
**Example:**
```javascript JavaScript
listener.use(
validateNumber({
fields: ['score'],
min: 0,
max: 100,
inclusive: true,
})
);
```
```typescript TypeScript
listener.use(
validateNumber({
fields: ['score'],
min: 0,
max: 100,
inclusive: true,
})
);
```
### validateNumberField
A standalone utility function that validates a single value against validation rules.
**Parameters:**
* `value` (string | number): The input value to validate
* `config` (NumberValidationConfig): Configuration object with validation rules
**Returns:**
`NumberValidationResult` object with:
* `value` (number | null): The parsed numeric value or null if parsing failed
* `errors` (string\[]): Array of error messages for fundamental parsing failures
* `warnings` (string\[]): Array of warning messages for validation rule violations
**Error Handling Example:**
```javascript JavaScript
import { validateNumberField } from '@flatfile/plugin-validate-number';
const result = validateNumberField('not-a-number', {});
console.log(result.errors); // Outputs: ['Must be a number']
console.log(result.value); // Outputs: null
const result2 = validateNumberField('99.99', { integerOnly: true });
console.log(result2.warnings); // Outputs: ['Must be an integer']
console.log(result2.value); // Outputs: 99.99
```
```typescript TypeScript
import { validateNumberField } from '@flatfile/plugin-validate-number';
const result = validateNumberField('not-a-number', {});
console.log(result.errors); // Outputs: ['Must be a number']
console.log(result.value); // Outputs: null
const result2 = validateNumberField('99.99', { integerOnly: true });
console.log(result2.warnings); // Outputs: ['Must be an integer']
console.log(result2.value); // Outputs: 99.99
```
### isPrime
A helper function to check if a number is prime. Used internally when `specialTypes: ['prime']` is configured.
**Parameters:**
* `num` (number): The number to check
**Returns:**
`boolean` - True if the number is prime, false otherwise
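The check is equivalent to a standard trial-division primality test. As a rough sketch of the logic (an illustration, not the package's actual source):

```javascript
// Trial-division primality check, illustrating what the plugin's
// isPrime helper tests for. This is a sketch, not the package source.
function isPrime(num) {
  if (!Number.isInteger(num) || num < 2) return false;
  if (num < 4) return true; // 2 and 3 are prime
  if (num % 2 === 0) return false;
  // Only odd divisors up to the square root need to be checked
  for (let i = 3; i * i <= num; i += 2) {
    if (num % i === 0) return false;
  }
  return true;
}
```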
## Troubleshooting
* **Validation not working**: Check the `fields` and `sheetSlug` configuration to ensure the plugin is targeting the correct data
* **Unexpected number formats**: Verify the `thousandsSeparator` and `decimalPoint` settings match your data format
* **Reference examples**: The test file `src/validate.number.plugin.spec.ts` provides clear examples of expected outcomes for every configuration option
## Notes
### Default Behavior
* The `min`/`max` validation is **exclusive by default**. To include boundary values, set `inclusive: true`
* Default `thousandsSeparator` is `','` and `decimalPoint` is `'.'`
* If both `round` and `truncate` are true, rounding occurs first, then truncation
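To make the separator handling concrete, here is a minimal sketch of how a localized number string can be normalized before parsing. This is an illustration of the `thousandsSeparator`/`decimalPoint` options only, not the plugin's actual implementation:

```javascript
// Normalize a localized number string, then parse it.
// Grouping separators are stripped and the locale's decimal
// point is replaced with '.' before handing off to Number().
function parseLocalizedNumber(input, options = {}) {
  const { thousandsSeparator = ',', decimalPoint = '.' } = options;
  const normalized = String(input)
    .split(thousandsSeparator).join('') // drop grouping separators
    .replace(decimalPoint, '.');        // use '.' as the decimal point
  const value = Number(normalized);
  return Number.isNaN(value) ? null : value;
}
```

For example, `parseLocalizedNumber('1.234,56', { thousandsSeparator: '.', decimalPoint: ',' })` yields `1234.56`.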
### Special Considerations
* The plugin operates on the `commit:created` event, running after user submission but before data finalization
* The plugin modifies `FlatfileRecord` in place, overwriting original values with parsed numeric values
* Fundamental parsing failures (e.g., "abc" as a number) result in "errors"
* Rule violations (e.g., number outside min/max range) result in "warnings"
* The plugin follows standard Flatfile patterns by adding errors/warnings to records rather than throwing exceptions
# OpenAPI Schema to Flatfile Blueprint Converter
Source: https://flatfile.com/docs/plugins/openapi-schema
Automatically converts OpenAPI v3.0.3 schemas into Flatfile Blueprints to streamline Space setup using existing API data structures
This plugin automatically converts an OpenAPI v3.0.3 schema into a Flatfile Blueprint. Its primary purpose is to streamline the setup of a Flatfile Space by using an existing OpenAPI specification as the single source of truth for data models. When a new Space is configured, the plugin fetches the specified OpenAPI schema, parses its components, and generates corresponding Workbooks and Sheets with correctly typed fields, constraints, and relationships. This is ideal for developers who want to quickly create a data import experience that matches their existing API data structures without manually defining each field in Flatfile.
## Installation
```bash
npm install @flatfile/plugin-convert-openapi-schema
```
## Configuration & Parameters
The plugin is configured via a single `setupFactory` object passed to the `configureSpaceWithOpenAPI` function.
### setupFactory
The main configuration object containing the following properties:
* **workbooks** (required): `Array` - An array of workbook configurations to create from OpenAPI schemas
  * **source** (required): `string` - The URL pointing to the raw OpenAPI schema JSON file
  * **sheets** (required): `Array` - An array of sheet configurations, mapping models from the schema to Flatfile Sheets
    * **model** (required): `string` - The name of the model in the OpenAPI `components.schemas` section to use for this sheet
    * **name** (optional): `string` - A custom display name for the Sheet. Defaults to the `model` name
    * **slug** (optional): `string` - A custom slug for the Sheet. Defaults to the `model` name
    * **actions** (optional): `Flatfile.Action[]` - An array of actions to add to the sheet
  * **name** (optional): `string` - A custom name for the Workbook. Defaults to the `info.title` property from the OpenAPI schema
  * **actions** (optional): `Flatfile.Action[]` - An array of actions to add to the workbook
* **space** (optional): `object` - A partial `Flatfile.spaces.SpaceConfig` object to apply custom configurations to the Space
* **debug** (optional): `boolean` - A flag for enabling debug mode
### callback (optional)
An optional function that executes after the Space and Workbooks have been successfully configured:
* **event**: `FlatfileEvent` - The event that triggered the configuration
* **workbookIds**: `string[]` - An array of IDs for the newly created workbooks
* **tick**: `TickFunction` - A function to report progress on the configuration job
## Usage Examples
### Basic Usage
This example sets up a listener that configures a Space with a single Workbook and Sheet based on a remote OpenAPI schema.
```javascript
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
export default function (listener) {
listener.on(
"space:configure",
configureSpaceWithOpenAPI({
workbooks: [
{
source:
"https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/examples/v3.1/webhook-example.json",
sheets: [{ model: "Pet" }],
},
],
})
);
}
```
```typescript
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.on(
"space:configure",
configureSpaceWithOpenAPI({
workbooks: [
{
source:
"https://raw.githubusercontent.com/OAI/OpenAPI-Specification/main/examples/v3.1/webhook-example.json",
sheets: [{ model: "Pet" }],
},
],
})
);
}
```
### Custom Configuration
This example shows how to customize the names of the Workbook and Sheet, and how to define multiple Sheets from the same source schema.
```javascript
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
export default function (listener) {
listener.on(
"space:configure",
configureSpaceWithOpenAPI({
workbooks: [
{
name: "My Custom Workbook",
source: "https://api.merge.dev/api/accounting/v1/schema",
sheets: [
{ model: "Account" },
{ model: "Address", name: "Mailing Addresses", slug: "addresses" },
{ model: "Invoice" },
],
},
],
})
);
}
```
```typescript
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.on(
"space:configure",
configureSpaceWithOpenAPI({
workbooks: [
{
name: "My Custom Workbook",
source: "https://api.merge.dev/api/accounting/v1/schema",
sheets: [
{ model: "Account" },
{ model: "Address", name: "Mailing Addresses", slug: "addresses" },
{ model: "Invoice" },
],
},
],
})
);
}
```
### Advanced Usage with Callback
This example demonstrates adding custom actions to Workbooks and Sheets, and using the optional callback function to perform additional setup tasks after the Space is configured.
```javascript
import api from "@flatfile/api";
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
export default function (listener) {
const callback = async (event, workbookIds, tick) => {
const { spaceId } = event.context;
// Create a getting started document
await api.documents.create(spaceId, {
title: "Getting Started",
body: "Welcome! Follow the steps to import your data.",
});
await tick(100, "Space setup complete");
};
listener.use(
configureSpaceWithOpenAPI(
{
workbooks: [
{
source: "https://api.merge.dev/api/accounting/v1/schema",
actions: [
{
operation: "submitActionFg",
mode: "foreground",
label: "Submit",
primary: true,
},
],
sheets: [{ model: "Account" }, { model: "Invoice" }],
},
],
},
callback
)
);
}
```
```typescript
import api from "@flatfile/api";
import type { FlatfileListener } from "@flatfile/listener";
import { configureSpaceWithOpenAPI } from "@flatfile/plugin-convert-openapi-schema";
export default function (listener: FlatfileListener) {
const callback = async (event, workbookIds, tick) => {
const { spaceId } = event.context;
// Create a getting started document
await api.documents.create(spaceId, {
title: "Getting Started",
body: "Welcome! Follow the steps to import your data.",
});
await tick(100, "Space setup complete");
};
listener.use(
configureSpaceWithOpenAPI(
{
workbooks: [
{
source: "https://api.merge.dev/api/accounting/v1/schema",
actions: [
{
operation: "submitActionFg",
mode: "foreground",
label: "Submit",
primary: true,
},
],
sheets: [{ model: "Account" }, { model: "Invoice" }],
},
],
},
callback
)
);
}
```
## Troubleshooting
If a sheet fails to generate, check the server logs for the message: `Schema not found for table name [sheet slug]`. This indicates that the `model` name provided in your configuration does not match any model name found in the `components.schemas` section of your OpenAPI file. Ensure the `model` name is spelled correctly and exists in the schema.
Common error scenarios:
* **Network errors**: If the `source` URL cannot be reached or returns a non-200 status code, an error like `Error fetching or processing schema: API returned status 404: Not Found` will be thrown
* **Missing models**: If a model specified in the `sheets` configuration is not found in the schema, an error is logged to the console and that sheet is skipped
* **Invalid schema**: Malformed OpenAPI schemas will cause the configuration job to fail
## Notes
### Default Behavior
* The plugin creates one Workbook for each entry in the `workbooks` array
* Workbook names default to the `info.title` field from the OpenAPI schema
* Sheet names and slugs default to the `model` name
* Fields are generated based on the model's properties, with OpenAPI types (`string`, `number`, `integer`, `boolean`) mapped to corresponding Flatfile field types
* Properties marked as `required` in the schema will have a `required` constraint in Flatfile
### Limitations and Considerations
* This plugin is designed for server-side execution within a Flatfile listener
* Officially supports OpenAPI version 3.0.3 (compatibility with other versions is not guaranteed)
* Depends on `@flatfile/plugin-space-configure` to apply the generated configuration
* OpenAPI `array` types are mapped to Flatfile's `string-list` type by default
* Complex arrays (arrays of objects) are not explicitly handled and may not convert as expected
* Schema references (`$ref`) are resolved only within the same document (e.g., `#/components/schemas/ModelName`)
* External file references are not supported
* Errors are logged to the console and re-thrown, which can aid in debugging within the listener's execution environment
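The same-document `$ref` lookup described above can be sketched in a few lines. This is an illustration of the `#/components/schemas/...` resolution behavior, not the plugin's actual resolver (a full JSON Pointer implementation would also unescape `~0`/`~1`):

```javascript
// Resolve a same-document JSON pointer like "#/components/schemas/Pet".
// External references (other files or URLs) are not handled, matching
// the limitation noted above.
function resolveLocalRef(doc, ref) {
  if (!ref.startsWith('#/')) return undefined; // external refs unsupported
  return ref
    .slice(2)
    .split('/')
    .reduce((node, key) => (node == null ? undefined : node[key]), doc);
}
```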
# PDF Extractor Plugin
Source: https://flatfile.com/docs/plugins/pdf-extractor
Extract tabular data from PDF files and convert them to CSV format using the pdftables.com service
The PDF Extractor plugin is designed to process PDF files uploaded to Flatfile. Its main purpose is to extract tabular data from a PDF document and make it available for import. When a user uploads a PDF file, this plugin automatically sends the file to the external `pdftables.com` service, which converts the data into a CSV format. The resulting CSV file is then uploaded back into the same Flatfile Space as a new file, ready to be mapped and imported. This is useful for workflows where source data is provided in PDF reports or documents instead of standard spreadsheet formats.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-pdf-extractor
```
## Configuration & Parameters
The `pdfExtractorPlugin` function accepts a configuration object with the following options:
| Parameter | Type | Required | Default | Description |
| --------- | --------- | -------- | ------- | ------------------------------------------ |
| `apiKey` | `string` | Yes | - | Your API key from `pdftables.com` service |
| `debug` | `boolean` | No | `false` | Enable verbose logging for troubleshooting |
### Default Behavior
By default, the plugin operates with debugging messages turned off. It listens for any `.pdf` file uploaded in `import` mode; when a matching file is detected, it attempts the conversion process. If the required `apiKey` is not provided, the plugin logs an error and does not process the file.
## Usage Examples
```javascript
// listener.js
import { pdfExtractorPlugin } from "@flatfile/plugin-pdf-extractor";
export default function (listener) {
// A pdftables.com API key is required
listener.use(pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY"
}));
}
```
```typescript
// listener.ts
import { FlatfileListener } from "@flatfile/listener";
import { pdfExtractorPlugin } from "@flatfile/plugin-pdf-extractor";
export default function (listener: FlatfileListener) {
// A pdftables.com API key is required
listener.use(pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY"
}));
}
```
### Configuration with Debug Mode
```javascript
// listener.js
import { pdfExtractorPlugin } from "@flatfile/plugin-pdf-extractor";
export default function (listener) {
// Using the plugin with debug option enabled for detailed logging
listener.use(
pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY",
debug: true,
})
);
}
```
```typescript
// listener.ts
import { FlatfileListener } from "@flatfile/listener";
import { pdfExtractorPlugin } from "@flatfile/plugin-pdf-extractor";
export default function (listener: FlatfileListener) {
// Using the plugin with debug option enabled for detailed logging
listener.use(
pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY",
debug: true,
})
);
}
```
## Troubleshooting
If the plugin is not working properly, follow these troubleshooting steps:
1. **Check API Key**: Ensure that a valid `apiKey` has been provided in the configuration
2. **Enable Debug Mode**: Set `debug: true` in the plugin configuration to get detailed logs in your listener's console output
3. **Network Access**: Ensure the listener environment has network access to `pdftables.com`
### Error Handling
The plugin has built-in error handling and will log errors to the console. Common errors include:
* "Found invalid API key"
* "Failed to convert PDF on pdftables.com"
* "Error writing file to disk"
* "Failed to upload PDF->CSV file"
```javascript
// Enable debug mode to see detailed error messages
listener.use(
pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY",
debug: true, // This will log errors from the API call or file upload
})
);
```
```typescript
// Enable debug mode to see detailed error messages
listener.use(
pdfExtractorPlugin({
apiKey: "YOUR_PDFTABLES_API_KEY",
debug: true, // This will log errors from the API call or file upload
})
);
```
## Notes
### Requirements and Limitations
* **External Dependency**: This plugin requires a subscription and a valid API key from the third-party service `pdftables.com`
* **File Scope**: The plugin only processes files with a `.pdf` extension that are uploaded in `import` mode. It will ignore all other files
* **Environment**: This plugin is intended to be run in a server-side listener environment (e.g., Node.js) as it interacts with the file system
### Processing Details
* **Asynchronous Processing**: The PDF conversion and re-upload process is asynchronous. After a user uploads a PDF, the new converted CSV file will appear in the "Files" list in the Flatfile Space a few moments later
* **File Naming**: The converted file will be named based on the original, with ` (Converted PDF)-{timestamp}.csv` appended to it
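The naming pattern can be illustrated as follows. This is a sketch of the convention described above, not the plugin's code; it assumes the original `.pdf` extension is stripped before the suffix is appended:

```javascript
// Derive the converted file's name from the original PDF name,
// following the "<name> (Converted PDF)-<timestamp>.csv" pattern.
function convertedFileName(originalName, timestamp = Date.now()) {
  const base = originalName.replace(/\.pdf$/i, '');
  return `${base} (Converted PDF)-${timestamp}.csv`;
}
```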
# Phone Number Validation and Formatting Plugin
Source: https://flatfile.com/docs/plugins/phone
Validates and formats phone numbers in Flatfile using country-specific validation with libphonenumber-js library
This plugin validates phone numbers in a specified field against a corresponding country code from another field. It leverages the `libphonenumber-js` library for robust, country-specific validation. The primary use case is to ensure that phone number data ingested into Flatfile is correctly formatted and valid. It can automatically correct and reformat phone numbers into various standard formats (e.g., NATIONAL, INTERNATIONAL, E164) or simply add an error to the record if the number is invalid.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-validate-phone
```
## Configuration & Parameters
The plugin accepts a configuration object with the following parameters:
| Parameter | Type | Required | Default | Description |
| --------------- | ------- | -------- | ------------ | ------------------------------------------------------------------------------------------- |
| `phoneField` | string | Yes | - | The API key (name) of the field containing the phone number to validate |
| `countryField` | string | Yes | - | The API key (name) of the field containing the country code (e.g., 'US', 'GB') |
| `sheetSlug` | string | No | `'**'` | The slug of the sheet to apply the validation to. Defaults to all sheets |
| `autoConvert` | boolean | No | `true` | If true, automatically updates the phone number field with the correctly formatted version |
| `format` | string | No | `'NATIONAL'` | The desired output format: 'NATIONAL', 'INTERNATIONAL', 'E164', 'RFC3966', or 'SIGNIFICANT' |
| `concurrency` | number | No | `10` | Number of records to process concurrently |
| `debug` | boolean | No | `false` | Enables verbose debug logging in the console |
| `formatOptions` | object | No | - | Additional formatting options for libphonenumber-js library |
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener) {
listener.use(validatePhone({
phoneField: 'phone',
countryField: 'country',
autoConvert: false
}));
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener: FlatfileListener) {
listener.use(validatePhone({
phoneField: 'phone',
countryField: 'country',
autoConvert: false
}));
}
```
This example validates the 'phone' field using the 'country' field in all sheets. It will add errors for invalid numbers but will not automatically reformat them.
### Configuration Example
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener) {
listener.use(validatePhone({
sheetSlug: 'contacts',
phoneField: 'phone_number',
countryField: 'country_code',
autoConvert: true,
format: 'INTERNATIONAL',
debug: true
}));
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener: FlatfileListener) {
listener.use(validatePhone({
sheetSlug: 'contacts',
phoneField: 'phone_number',
countryField: 'country_code',
autoConvert: true,
format: 'INTERNATIONAL',
debug: true
}));
}
```
This example validates and formats phone numbers in the "contacts" sheet using the 'phone\_number' and 'country\_code' fields. Valid numbers are automatically converted to INTERNATIONAL format.
### Advanced Usage with Format Options
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener) {
listener.use(validatePhone({
sheetSlug: 'my-sheet',
phoneField: 'phone_number',
countryField: 'country_code',
format: 'INTERNATIONAL',
formatOptions: { formatExtension: 'national' }
}));
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import validatePhone from '@flatfile/plugin-validate-phone';
export default function (listener: FlatfileListener) {
listener.use(validatePhone({
sheetSlug: 'my-sheet',
phoneField: 'phone_number',
countryField: 'country_code',
format: 'INTERNATIONAL',
formatOptions: { formatExtension: 'national' }
}));
}
```
This example demonstrates using the `formatOptions` parameter to pass more specific formatting rules through to the underlying `libphonenumber-js` library.
## Troubleshooting
If validation is not working as expected:
1. **Check field names**: Ensure that the `phoneField` and `countryField` values in the configuration exactly match the field keys in your Sheet template.
2. **Verify country codes**: Ensure the `countryField` in your data contains valid two-letter ISO 3166-1 alpha-2 country codes (e.g., "US", "GB", "DE").
3. **Enable debug logging**: Set the `debug` option to `true` in the configuration to see more detailed logging output in your environment's console, which can help diagnose issues.
## Notes
### Default Behavior
* By default, the plugin applies to all sheets (`sheetSlug: '**'`)
* Phone numbers are automatically converted to the specified format when `autoConvert` is `true` (default)
* The default format is 'NATIONAL'
* Records are processed with a concurrency of 10
### Error Handling
The plugin adds errors to specific fields rather than throwing exceptions:
* **Empty phone number**: "Phone number is required"
* **Empty country field**: "Country is required for phone number formatting"
* **Invalid phone number**: "Invalid phone number format for \[country]"
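A minimal sketch of this field-level error logic is shown below. The actual validity check is delegated to `libphonenumber-js` in the real plugin; here it is passed in as a function for illustration:

```javascript
// Assign the error messages listed above to the field that caused them.
// isValidForCountry stands in for the libphonenumber-js validity check.
function phoneFieldErrors(phone, country, isValidForCountry) {
  if (!phone) return { field: 'phone', message: 'Phone number is required' };
  if (!country) return { field: 'country', message: 'Country is required for phone number formatting' };
  if (!isValidForCountry(phone, country)) {
    return { field: 'phone', message: `Invalid phone number format for ${country}` };
  }
  return null; // no error
}
```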
### Requirements and Limitations
* This plugin has an external dependency on `libphonenumber-js`
* The `countryField` must contain a valid two-letter ISO 3166-1 alpha-2 country code
* The plugin operates as a listener on the `commit:created` event, processing records in batches
* While the documentation mentions specific support for US, UK, India, Germany, France, and Brazil, the underlying library supports a wider range of countries
# Pivot Table Exporter
Source: https://flatfile.com/docs/plugins/pivot-table
A Flatfile plugin that performs data analysis and summarization by generating pivot tables from workbook data and uploading them as Markdown documents.
The Pivot Table Exporter plugin for Flatfile is designed to perform data analysis and summarization directly within the Flatfile environment. It listens for a specific job trigger ('workbook:generatePivotTable'), fetches all records from the first sheet in a workbook, and then generates a pivot table based on user-defined configuration. The resulting pivot table is formatted into a Markdown document and uploaded back to the Flatfile space. This is useful for users who need to quickly aggregate, group, and analyze data during an import process without leaving the Flatfile UI. For example, it can be used to summarize sales data by region and category, or count records based on different attributes.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-export-pivot-table
```
## Configuration & Parameters
The plugin accepts a configuration object with the following parameters:
### Required Parameters
* **pivotColumn** (`string`): The name of the column in your sheet to use as the main pivot (rows in the output table)
* **aggregateColumn** (`string`): The name of the column containing the numerical data to be aggregated
* **aggregationMethod** (`string`): The mathematical operation to perform on the `aggregateColumn` (e.g., `'sum'` or `'average'`)
### Optional Parameters
* **groupByColumn** (`string`): An optional column to group the data by. If provided, this will create distinct columns in the output table for each unique value in the `groupByColumn`
* **debug** (`boolean`): If set to `true`, the plugin will log any caught errors to the console, which is useful for server-side debugging
### Default Behavior
By default, the plugin requires `pivotColumn`, `aggregateColumn`, and `aggregationMethod` to be defined. If `groupByColumn` is not provided, the plugin will aggregate data across all records for each unique `pivotColumn` value and present the result in a single "Total" column. The `debug` mode is disabled by default, meaning errors are only reported back to the Flatfile job status without being logged to the console.
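The aggregation behavior described above can be sketched as a simplified, in-memory grouping step, with `'Total'` used as the single column when no `groupByColumn` is given. This is an illustration under those assumptions (plain-object records; only `'sum'` and `'average'` shown), not the plugin's implementation:

```javascript
// Build a pivot table: rows keyed by pivotColumn, columns keyed by
// groupByColumn (or a single "Total" column when it is omitted).
// Non-numeric aggregate values are treated as 0, as noted above.
function buildPivotTable(records, config) {
  const { pivotColumn, aggregateColumn, aggregationMethod, groupByColumn } = config;
  const cells = {}; // { rowKey: { colKey: number[] } }
  for (const record of records) {
    const row = String(record[pivotColumn]);
    const col = groupByColumn ? String(record[groupByColumn]) : 'Total';
    const value = Number(record[aggregateColumn]) || 0;
    ((cells[row] ??= {})[col] ??= []).push(value);
  }
  const aggregate = (values) => {
    const sum = values.reduce((a, b) => a + b, 0);
    return aggregationMethod === 'average' ? sum / values.length : sum;
  };
  const table = {};
  for (const [row, cols] of Object.entries(cells)) {
    table[row] = {};
    for (const [col, values] of Object.entries(cols)) {
      table[row][col] = aggregate(values);
    }
  }
  return table;
}
```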
## Usage Examples
### Basic Usage
This example shows the simplest way to use the plugin with the required configuration.
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
export default function(listener) {
listener.use(
pivotTablePlugin({
pivotColumn: 'country',
aggregateColumn: 'order_value',
aggregationMethod: 'sum',
})
);
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
export default function(listener: FlatfileListener) {
listener.use(
pivotTablePlugin({
pivotColumn: 'country',
aggregateColumn: 'order_value',
aggregationMethod: 'sum',
})
);
}
```
### Configuration with Optional Parameters
This example demonstrates a more complete configuration, including the optional `groupByColumn` and `debug` options.
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
export default function(listener) {
listener.use(
pivotTablePlugin({
pivotColumn: 'Region',
aggregateColumn: 'Sales',
aggregationMethod: 'average',
groupByColumn: 'Category',
debug: true,
})
);
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
export default function(listener: FlatfileListener) {
listener.use(
pivotTablePlugin({
pivotColumn: 'Region',
aggregateColumn: 'Sales',
aggregationMethod: 'average',
groupByColumn: 'Category',
debug: true,
})
);
}
```
### Error Handling with Debug Mode
The plugin has built-in error handling. To see more detailed logs on your server, enable the `debug` flag.
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
const listener = new FlatfileListener();
// Enable debug mode to log errors to the server console
listener.use(
pivotTablePlugin({
pivotColumn: 'non_existent_column', // This will cause an error
aggregateColumn: 'Sales',
aggregationMethod: 'sum',
debug: true,
})
);
export default listener;
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { pivotTablePlugin } from '@flatfile/plugin-export-pivot-table';
const listener = new FlatfileListener();
// Enable debug mode to log errors to the server console
listener.use(
pivotTablePlugin({
pivotColumn: 'non_existent_column', // This will cause an error
aggregateColumn: 'Sales',
aggregationMethod: 'sum',
debug: true,
})
);
export default listener;
```
## Troubleshooting
If you encounter issues with the pivot table plugin, follow these troubleshooting steps:
1. **Check the job outcome message**: If a job fails, check the "outcome" message on the job in the Flatfile UI for initial error information.
2. **Enable debug mode**: If the error message is not clear, re-run the job with the `debug: true` configuration option enabled and check your server-side logs for a more detailed error stack trace.
3. **Verify column names**: Common errors include providing incorrect column names in the configuration that do not match the columns in the sheet.
4. **Check data types**: Ensure that the `aggregateColumn` contains numeric data. Non-numeric values will be treated as 0.
## Notes
### Requirements and Limitations
* The plugin must be deployed in a server-side listener environment
* It triggers on a specific job: `job: 'workbook:generatePivotTable'`. This job must be initiated from the Flatfile UI or via an API call
* The plugin processes data from the first sheet it finds in the workbook (`sheets.data[0]`). It does not currently support selecting a specific sheet by name or ID
* The values in the `aggregateColumn` are expected to be numbers. Non-numeric values will be treated as 0
### Error Handling
The plugin uses a `try...catch` block to manage errors during its execution:
* On start, it acknowledges the job with `api.jobs.ack`
* On success, it completes the job with `api.jobs.complete` and a success message
* On failure, it fails the job with `api.jobs.fail` and an error message
* If the `debug: true` option is set, the caught error object is also logged to the console
# Record Hook Plugin
Source: https://flatfile.com/docs/plugins/record-hook
Execute custom logic on individual data records within a Flatfile Sheet with support for validation, transformation, enrichment, and cleaning.
The Record Hook plugin provides a way to execute custom logic on individual data records within a Flatfile Sheet. It works by attaching a listener to the `commit:created` event, which fires when new data is added or existing data is updated.
This plugin is ideal for a variety of use cases, including:
* **Data Validation:** Implementing complex validation rules that go beyond Flatfile's built-in validators, such as checking for valid email formats or ensuring conditional field requirements.
* **Data Transformation:** Modifying data on the fly, like capitalizing names, standardizing formats, or setting default values.
* **Data Enrichment:** Augmenting records with additional information from external sources.
* **Data Cleaning:** Removing whitespace, correcting common typos, or normalizing values.
The plugin offers two main functions: `recordHook` for processing records one-by-one, and `bulkRecordHook` for processing records in batches, which is more efficient for large datasets.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-record-hook
```
## Configuration & Parameters
### recordHook
Processes individual records one at a time. The provided callback function is executed for each record in the commit.
**Parameters:**
* `sheetSlug` (string, required): The slug of the Sheet you want the hook to apply to
* `callback` (function, required): A function that contains your custom logic. It receives the record and the event object as arguments
* `options` (object, optional): Configuration options
* `concurrency` (number, default: 10): Controls how many individual record handlers can run in parallel
* `debug` (boolean, default: false): When set to `true`, enables verbose logging to the console
### bulkRecordHook
Processes records in batches (chunks). The provided callback function is executed for each chunk of records.
**Parameters:**
* `sheetSlug` (string, required): The slug of the Sheet you want the hook to apply to
* `callback` (function, required): A function that contains your custom logic. It receives an array of records and the event object as arguments
* `options` (object, optional): Configuration options
* `chunkSize` (number, default: 10000): Specifies the number of records to process in each batch. The recommended maximum is 5000, so consider setting this option explicitly for large datasets
* `parallel` (number, default: 1): Specifies how many chunks of records to process in parallel
* `debug` (boolean, default: false): When set to `true`, enables verbose logging to the console
**Default Behavior:**
By default, the plugin processes records for the specified `sheetSlug` after a commit is created. `bulkRecordHook` processes all records in a single chunk sequentially (`parallel: 1`). `recordHook` processes up to 10 records concurrently. Debug logging is disabled.
## Usage Examples
### Basic Single Record Processing
```javascript JavaScript
import { recordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
recordHook("contacts", (record) => {
const firstName = record.get("firstName");
if (firstName) {
record.set("firstName", firstName.charAt(0).toUpperCase() + firstName.slice(1));
}
return record;
})
);
}
```
```typescript TypeScript
import { recordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.use(
recordHook("contacts", (record) => {
const firstName = record.get("firstName") as string;
if (firstName) {
record.set("firstName", firstName.charAt(0).toUpperCase() + firstName.slice(1));
}
return record;
})
);
}
```
### Basic Bulk Record Processing
```javascript JavaScript
import { bulkRecordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
bulkRecordHook("contacts", (records) => {
records.forEach((record) => {
const firstName = record.get("firstName");
if (firstName) {
record.set("firstName", firstName.charAt(0).toUpperCase() + firstName.slice(1));
}
});
return records;
})
);
}
```
```typescript TypeScript
import { bulkRecordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
import { FlatfileRecord } from "@flatfile/hooks";
export default function (listener: FlatfileListener) {
listener.use(
bulkRecordHook("contacts", (records: FlatfileRecord[]) => {
records.forEach((record) => {
const firstName = record.get("firstName") as string;
if (firstName) {
record.set("firstName", firstName.charAt(0).toUpperCase() + firstName.slice(1));
}
});
return records;
})
);
}
```
### Configuration Example
```javascript JavaScript
import { bulkRecordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
bulkRecordHook(
"contacts",
(records) => {
// Your processing logic here
return records;
},
{ chunkSize: 1000, parallel: 5, debug: true }
)
);
}
```
```typescript TypeScript
import { bulkRecordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
import { FlatfileRecord } from "@flatfile/hooks";
export default function (listener: FlatfileListener) {
listener.use(
bulkRecordHook(
"contacts",
(records: FlatfileRecord[]) => {
// Your processing logic here
return records;
},
{ chunkSize: 1000, parallel: 5, debug: true }
)
);
}
```
### Advanced Usage - Email Validation
```javascript JavaScript
import { recordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener) {
listener.use(
recordHook("contacts", (record) => {
const email = record.get("email");
if (!email) {
record.addError("email", "Email address is required.");
return record;
}
const validEmailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
if (!validEmailRegex.test(email)) {
record.addError("email", "Please enter a valid email address.");
}
return record;
})
);
}
```
```typescript TypeScript
import { recordHook } from "@flatfile/plugin-record-hook";
import { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.use(
recordHook("contacts", (record) => {
const email = record.get("email") as string;
if (!email) {
record.addError("email", "Email address is required.");
return record;
}
const validEmailRegex = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;
if (!validEmailRegex.test(email)) {
record.addError("email", "Please enter a valid email address.");
}
return record;
})
);
}
```
## Troubleshooting
The most effective way to troubleshoot issues is to set the `debug: true` option in the plugin configuration. This will print detailed logs to the console, showing the timing and status of each major step: fetching data, running the handler, filtering modified records, and updating records. This can help identify performance bottlenecks or see why records are not being updated as expected.
## Notes
### Special Considerations
* The plugin operates on the `commit:created` event. This means it runs after Flatfile's initial parsing and validation but before the data is finalized.
* To save changes, your callback function **must** return the modified record (for `recordHook`) or the array of modified records (for `bulkRecordHook`). If you return `null`, `undefined`, or nothing, your changes will not be persisted.
* The plugin intelligently detects which records have actually been modified and only sends those records back to the Flatfile API for an update, optimizing performance.
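The persistence rule noted above can be illustrated with a plain-JavaScript sketch. The stub below is purely illustrative and stands in for a real `FlatfileRecord`, exposing only the `get`/`set` surface used here:

```javascript
// Minimal stand-in for a FlatfileRecord; purely illustrative, with
// only the get/set surface used below.
function makeStubRecord(values) {
  return {
    get: (key) => values[key],
    set: (key, value) => { values[key] = value; },
  };
}

// Correct: mutate, then return the record so the plugin persists changes.
function goodCallback(record) {
  const firstName = record.get('firstName');
  if (firstName) record.set('firstName', firstName.trim());
  return record;
}

// Incorrect: the mutation runs, but with no return value the plugin
// has nothing to send back, so the change is silently dropped.
function badCallback(record) {
  record.set('firstName', 'ignored');
}
```

The same rule applies to `bulkRecordHook`, where the callback must return the array of records rather than a single record.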
### Limitations
* The `chunkSize` for `bulkRecordHook` has a recommended maximum of 5000 records to ensure stability and performance.
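For capacity planning, the number of handler invocations per commit is simply the record count divided by `chunkSize`, rounded up:

```javascript
// Number of times the bulk handler runs for a single commit:
// the record count divided by chunkSize, rounded up.
const chunkCount = (recordCount, chunkSize) => Math.ceil(recordCount / chunkSize);
```

For example, 12,000 records with `chunkSize: 5000` yields three handler invocations.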
### Error Handling
* The plugin is designed to be robust. It wraps the execution of the user-provided callback in a `try...catch` block.
* If an error is thrown inside your callback, the plugin will log the error message to the console with a `[FATAL]` prefix.
* Crucially, it will then ensure the commit is marked as complete with the Flatfile API, preventing the import process from stalling due to an error in custom code.
# Rollout: Automatic Workbook Updates
Source: https://flatfile.com/docs/plugins/rollout
Automatically apply schema changes to existing, live workbooks when a new version of a Flatfile Agent is deployed
The Rollout plugin automates the process of applying schema changes to existing, live workbooks. When a new version of a Flatfile Agent is deployed (triggering `agent:created` or `agent:updated` events), this plugin identifies relevant workbooks and initiates an update process.
Its primary use case is to ensure that all workbooks associated with a particular configuration stay up-to-date with the latest schema definitions without manual intervention. It works by creating a job to apply the new schema via a user-defined `updater` function. After the schema is updated, it re-triggers all data hooks on all records in the updated sheets, so validations and transformations are re-evaluated against the new schema.
This plugin should be deployed in a server-side listener.
## Installation
```bash
npm install @flatfile/plugin-rollout
```
## Configuration & Parameters
The rollout plugin accepts a configuration object with the following parameters:
### `namespace` (required)
* **Type:** `string`
* **Description:** A string used to filter which spaces the plugin should operate on. The plugin will only consider spaces matching this namespace. The expected format is typically `workbook:your-namespace`.
### `dev` (required)
* **Type:** `boolean`
* **Description:** When set to `true`, the plugin will also trigger an update check when the agent starts up in a local development environment (i.e., not running on AWS Lambda). This is useful for testing changes locally without needing to deploy a new agent version.
* **Default:** Effectively `false`; updates in local development are suppressed unless this is explicitly set to `true`.
### `updater` (required)
* **Type:** `(space: Flatfile.Space, workbooks: Flatfile.Workbook[]) => Promise<Flatfile.Workbook[]>`
* **Description:** An asynchronous callback function you provide to perform the actual schema updates. It receives the `space` being processed and a list of its `workbooks`. You are responsible for using the Flatfile API within this function to apply your new schema to the workbooks. The function should return an array of the workbooks that were successfully updated.
## Usage Examples
```javascript JavaScript - Basic Usage
import { FlatfileListener } from '@flatfile/listener'
import { rollout } from '@flatfile/plugin-rollout'
import { FlatfileClient } from '@flatfile/api'
const listener = new FlatfileListener()
const api = new FlatfileClient()
const rolloutPlugin = rollout({
namespace: 'workbook:my-app',
dev: process.env.NODE_ENV === 'development',
updater: async (space, workbooks) => {
// In a real scenario, you would update the workbook's sheets
// with a new schema configuration here.
console.log(`Updating workbooks in space ${space.id}`)
// For example:
// const newSchema = require('./schemas/my-new-schema')
// await api.sheets.update(workbooks[0].sheets[0].id, { config: newSchema.config })
// Return the workbooks you updated to re-trigger hooks.
return workbooks
},
})
// Register the job handler on a filtered listener
listener.namespace(['workbook:my-app'], (filteredListener) => {
filteredListener.use(rolloutPlugin)
})
// Register the root event handler on the main listener
// This is required to catch the agent deployment events.
listener.use(rolloutPlugin.root)
```
```typescript TypeScript - Basic Usage
import { FlatfileListener } from '@flatfile/listener'
import { rollout } from '@flatfile/plugin-rollout'
import { FlatfileClient } from '@flatfile/api'
import { Flatfile } from '@flatfile/api'
const listener = new FlatfileListener()
const api = new FlatfileClient()
const rolloutPlugin = rollout({
namespace: 'workbook:my-app',
dev: process.env.NODE_ENV === 'development',
updater: async (space: Flatfile.Space, workbooks: Flatfile.Workbook[]) => {
// In a real scenario, you would update the workbook's sheets
// with a new schema configuration here.
console.log(`Updating workbooks in space ${space.id}`)
// For example:
// const newSchema = require('./schemas/my-new-schema')
// await api.sheets.update(workbooks[0].sheets[0].id, { config: newSchema.config })
// Return the workbooks you updated to re-trigger hooks.
return workbooks
},
})
// Register the job handler on a filtered listener
listener.namespace(['workbook:my-app'], (filteredListener) => {
filteredListener.use(rolloutPlugin)
})
// Register the root event handler on the main listener
// This is required to catch the agent deployment events.
listener.use(rolloutPlugin.root)
```
```javascript JavaScript - Advanced Configuration
import { FlatfileListener } from '@flatfile/listener'
import { rollout } from '@flatfile/plugin-rollout'
import { FlatfileClient } from '@flatfile/api'
import { myNewSheetConfig } from './new-sheet.config'
const listener = new FlatfileListener()
const api = new FlatfileClient()
const rolloutPluginWithConfig = rollout({
// Only act on spaces with this namespace
namespace: 'workbook:customer-onboarding',
// Enable local dev updates
dev: true,
// Provide the logic to update the workbook schemas
updater: async (space, workbooks) => {
const updatedWorkbooks = []
for (const workbook of workbooks) {
// Assuming we want to update the first sheet of each workbook
if (workbook.sheets && workbook.sheets.length > 0) {
const sheetId = workbook.sheets[0].id
try {
await api.sheets.update(sheetId, {
config: myNewSheetConfig,
})
console.log(`Successfully updated schema for sheet ${sheetId}`)
updatedWorkbooks.push(workbook)
} catch (e) {
console.error(`Failed to update sheet ${sheetId}:`, e)
}
}
}
// Return only the workbooks that were successfully updated
return updatedWorkbooks
},
})
// The plugin returns two parts that must both be registered.
listener.namespace(['workbook:customer-onboarding'], (filteredListener) => {
filteredListener.use(rolloutPluginWithConfig)
})
listener.use(rolloutPluginWithConfig.root)
```
```typescript TypeScript - Advanced Configuration
import { FlatfileListener } from '@flatfile/listener'
import { rollout } from '@flatfile/plugin-rollout'
import { FlatfileClient } from '@flatfile/api'
import { Flatfile } from '@flatfile/api'
import { myNewSheetConfig } from './new-sheet.config'
const listener = new FlatfileListener()
const api = new FlatfileClient()
const rolloutPluginWithConfig = rollout({
// Only act on spaces with this namespace
namespace: 'workbook:customer-onboarding',
// Enable local dev updates
dev: true,
// Provide the logic to update the workbook schemas
updater: async (space: Flatfile.Space, workbooks: Flatfile.Workbook[]) => {
const updatedWorkbooks: Flatfile.Workbook[] = []
for (const workbook of workbooks) {
// Assuming we want to update the first sheet of each workbook
if (workbook.sheets && workbook.sheets.length > 0) {
const sheetId = workbook.sheets[0].id
try {
await api.sheets.update(sheetId, {
config: myNewSheetConfig,
})
console.log(`Successfully updated schema for sheet ${sheetId}`)
updatedWorkbooks.push(workbook)
} catch (e) {
console.error(`Failed to update sheet ${sheetId}:`, e)
}
}
}
// Return only the workbooks that were successfully updated
return updatedWorkbooks
},
})
// The plugin returns two parts that must both be registered.
listener.namespace(['workbook:customer-onboarding'], (filteredListener) => {
filteredListener.use(rolloutPluginWithConfig)
})
listener.use(rolloutPluginWithConfig.root)
```
## API Reference
### `rollout(config)`
Initializes the rollout plugin. It sets up listeners for agent deployment events and a job handler to perform the updates. It relies on a user-provided `updater` function to apply the specific schema changes.
**Parameters:**
* `config` (object): Configuration object with the following properties:
* `namespace` (string, required): The namespace to filter spaces for updates
* `dev` (boolean, required): Set to `true` to enable updates on local agent reloads
* `updater` (function, required): Async function that performs the schema update
**Return Value:**
A listener callback function with a `.root` property (also a listener callback). The main callback handles the `space:auto-update` job, while the `.root` callback handles `agent:created` and `agent:updated` events.
```javascript JavaScript - Error Handling
const rolloutWithErrorHandling = rollout({
namespace: 'workbook:my-app',
dev: true,
updater: async (space, workbooks) => {
const updated = []
for (const wb of workbooks) {
try {
// Attempt to update a sheet
const sheetId = wb.sheets[0].id
// ... your api call to update the sheet ...
console.log(`Updated sheet ${sheetId}`)
updated.push(wb)
} catch (error) {
console.error(`Could not update workbook ${wb.id}. Reason:`, error)
// Continue to the next workbook without stopping the whole process
}
}
return updated
},
})
```
```typescript TypeScript - Error Handling
const rolloutWithErrorHandling = rollout({
namespace: 'workbook:my-app',
dev: true,
updater: async (space: Flatfile.Space, workbooks: Flatfile.Workbook[]) => {
const updated: Flatfile.Workbook[] = []
for (const wb of workbooks) {
try {
// Attempt to update a sheet
const sheetId = wb.sheets[0].id
// ... your api call to update the sheet ...
console.log(`Updated sheet ${sheetId}`)
updated.push(wb)
} catch (error) {
console.error(`Could not update workbook ${wb.id}. Reason:`, error)
// Continue to the next workbook without stopping the whole process
}
}
return updated
},
})
```
## Troubleshooting
### Updates not triggering
* Ensure the `agent:created` or `agent:updated` events are firing upon deployment
* Check that the target space has the correct namespace and the required secret (`FF_AUTO_UPDATE` or `FF_AUTO_UPDATE_DEV`) is set to `'true'`
### Local updates not working
* Make sure you have set `dev: true` in the plugin configuration
### Hooks not re-running
* Verify that your `updater` function is returning an array containing the workbooks that you successfully updated
* If an empty or `undefined` value is returned, the hook re-triggering step will be skipped
## Notes
### Default Behavior
By default, the plugin will only trigger updates for production spaces that have a secret named `FF_AUTO_UPDATE` with a value of `'true'`. If the `dev: true` option is used, it will trigger updates for development spaces that have a secret named `FF_AUTO_UPDATE_DEV` with a value of `'true'`. This secret-based mechanism acts as an explicit opt-in for each space, preventing accidental updates.
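The opt-in check described above can be sketched as a small predicate. This is a hypothetical helper, not the plugin's actual source; it assumes secrets are available as plain `{ name, value }` objects, as returned by the secrets listing endpoint:

```javascript
// Hypothetical sketch of the secret-based opt-in check: a space is
// eligible for auto-update only when the matching secret is 'true'.
function isOptedIn(secrets, dev) {
  const secretName = dev ? 'FF_AUTO_UPDATE_DEV' : 'FF_AUTO_UPDATE';
  const secret = secrets.find((s) => s.name === secretName);
  return secret !== undefined && secret.value === 'true';
}
```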
### Special Considerations
* **Dual Listener Registration:** The plugin returns a function with a `.root` property. You MUST register both. The main function handles the job execution and should be on a namespaced listener. The `.root` function handles the global events that trigger the job and must be on the root listener.
* **Server-Side Only:** This plugin is designed to run in a server-side listener environment, not in the browser.
* **Opt-In via Secrets:** The plugin will not update a space unless it contains a specific secret. For production environments, a secret named `FF_AUTO_UPDATE` must exist with the value `'true'`. For local development (with `dev: true`), the secret must be `FF_AUTO_UPDATE_DEV` with the value `'true'`.
* **Hook Re-triggering:** To re-run data hooks after a schema change, the plugin updates the metadata of every record in the updated sheets by adding a `_autoUpdateKey` with a random UUID. This modification forces Flatfile to re-evaluate each record.
### API Permissions
The agent using this plugin requires API permissions for the following actions:
* `space:read` (for `spaces.list`, `spaces.get`)
* `secret:read` (for `secrets.list`)
* `workbook:read` (for `workbooks.list`)
* `job:write` (for `jobs.create`)
* Your `updater` function will likely require additional permissions, such as `sheet:write` to update sheet configurations.
# Sentiment Analysis Plugin
Source: https://flatfile.com/docs/plugins/sentiment
Automatically analyze sentiment in text fields and add sentiment scores and categories to your Flatfile records
This plugin analyzes the sentiment of text within specified fields of records in a Flatfile Sheet. It operates during the data processing phase by hooking into the record commit lifecycle. For each designated text field, the plugin calculates a sentiment score, categorizes it as 'positive', 'negative', or 'neutral', and adds this information back to the record in two new fields: `_sentiment_score` and `_sentiment_category`. It also adds informational messages to the record detailing the outcome of the analysis. This is useful for automatically classifying customer feedback, product reviews, support tickets, or any other free-text data to quickly gauge user sentiment.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-enrich-sentiment
```
## Configuration & Parameters
The plugin accepts a configuration object with the following parameters:
### `sheetSlug` (required)
* **Type:** `string`
* **Description:** The slug of the sheet that this plugin should run on.
### `textFields`
* **Type:** `string[]`
* **Description:** An array of field keys (column names) that contain the text to be analyzed.
* **Default:** `['description']`
### `automaticValidation`
* **Type:** `boolean`
* **Description:** A flag to enable or disable the sentiment analysis. If set to `false`, the plugin will add an info message to each record indicating that analysis is disabled but will not perform any analysis.
* **Default:** `false`. The analysis only runs if this is explicitly set to `true`.
### `errorMessages`
* **Type:** `object`
* **Description:** A configuration object for custom error messages. Note: This option is defined in the type interface but is not used in the current plugin implementation.
## Usage Examples
### Basic Usage
```javascript
import { FlatfileListener } from '@flatfile/listener';
import { enrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener) {
// Analyzes the 'description' field in the 'contacts' sheet.
listener.use(enrichSentiment({
sheetSlug: 'contacts',
textFields: ['description'],
automaticValidation: true
}));
}
```
```typescript
import { FlatfileListener } from '@flatfile/listener';
import { enrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener: FlatfileListener) {
// Analyzes the 'description' field in the 'contacts' sheet.
listener.use(enrichSentiment({
sheetSlug: 'contacts',
textFields: ['description'],
automaticValidation: true
}));
}
```
### Multiple Text Fields
```javascript
import { FlatfileListener } from '@flatfile/listener';
import { enrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener) {
// A more detailed configuration for a customer feedback sheet.
// Analyzes both 'comment' and 'feedback' fields.
listener.use(enrichSentiment({
sheetSlug: 'customer-feedback',
textFields: ['comment', 'feedback'],
automaticValidation: true
}));
}
```
```typescript
import { FlatfileListener } from '@flatfile/listener';
import { enrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener: FlatfileListener) {
// A more detailed configuration for a customer feedback sheet.
// Analyzes both 'comment' and 'feedback' fields.
listener.use(enrichSentiment({
sheetSlug: 'customer-feedback',
textFields: ['comment', 'feedback'],
automaticValidation: true
}));
}
```
### Advanced Custom Usage
```javascript
// This example shows how to use the exported helper functions
// inside a custom recordHook for more fine-grained control.
import { FlatfileListener } from '@flatfile/listener';
import { recordHook } from '@flatfile/plugin-record-hook';
import { performEnrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener) {
listener.use(recordHook('reviews', async (record) => {
const reviewText = String(record.get('review_text'));
// Use the exported helper function directly
const { error, result } = performEnrichSentiment(reviewText, 'review_text');
if (error) {
record.addWarning('review_text', error);
} else if (result) {
// Custom logic: only add a 'positive_review' flag
if (result.category === 'positive' && result.score > 3) {
record.set('positive_review', true);
record.addInfo('review_text', 'This is a highly positive review!');
}
}
return record;
}));
}
```
```typescript
// This example shows how to use the exported helper functions
// inside a custom recordHook for more fine-grained control.
import { FlatfileListener } from '@flatfile/listener';
import { recordHook } from '@flatfile/plugin-record-hook';
import { performEnrichSentiment } from '@flatfile/plugin-enrich-sentiment';
export default function(listener: FlatfileListener) {
listener.use(recordHook('reviews', async (record) => {
const reviewText = String(record.get('review_text'));
// Use the exported helper function directly
const { error, result } = performEnrichSentiment(reviewText, 'review_text');
if (error) {
record.addWarning('review_text', error);
} else if (result) {
// Custom logic: only add a 'positive_review' flag
if (result.category === 'positive' && result.score > 3) {
record.set('positive_review', true);
record.addInfo('review_text', 'This is a highly positive review!');
}
}
return record;
}));
}
```
### Using Utility Functions
```javascript
import { analyzeSentiment } from '@flatfile/plugin-enrich-sentiment';
const analysis = analyzeSentiment("Flatfile is an amazing tool!");
// analysis -> { score: 6, category: 'positive' }
```
```typescript
import { analyzeSentiment } from '@flatfile/plugin-enrich-sentiment';
const analysis = analyzeSentiment("Flatfile is an amazing tool!");
// analysis -> { score: 6, category: 'positive' }
```
## API Reference
### `enrichSentiment(config: EnrichSentimentConfig)`
The main entry point for the plugin. It creates and registers a `recordHook` with Flatfile that performs sentiment analysis on incoming records based on the provided configuration.
**Parameters:**
* `config` (EnrichSentimentConfig): An object containing configuration options
* `sheetSlug` (string): The slug of the sheet to target
* `textFields` (string\[]): An array of field keys to analyze. Defaults to `['description']`
* `automaticValidation` (boolean): Must be `true` to enable analysis
**Returns:** A function of type `(listener: FlatfileListener) => void` which is used to install the plugin into a Flatfile listener.
### `analyzeSentiment(text: string)`
A pure function that takes a string of text and returns its sentiment analysis.
**Parameters:**
* `text` (string): The input text to analyze
**Returns:** An object with the following properties:
* `score` (number): A numerical score representing the sentiment. Positive values are positive, negative values are negative
* `category` ('positive' | 'negative' | 'neutral'): A string category for the sentiment
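The score-to-category mapping can be sketched as follows. This is a hypothetical reconstruction; the plugin's exact thresholds are not documented here, so a score of zero is assumed to map to `'neutral'`:

```javascript
// Hypothetical reconstruction of the score-to-category mapping;
// a score of exactly zero is assumed to be 'neutral'.
function categorize(score) {
  if (score > 0) return 'positive';
  if (score < 0) return 'negative';
  return 'neutral';
}
```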
### `performEnrichSentiment(value: string, field: string)`
A wrapper around `analyzeSentiment` that includes basic validation and formats the output for use within a record hook. It checks for empty input and structures the return value with either an error or a result object.
**Parameters:**
* `value` (string): The text value from the record field
* `field` (string): The name of the field being analyzed, used for creating informative messages
**Returns:** An object with one of two shapes:
* On success: `{ error: null, result: { score: number, category: string, message: string } }`
* On error (empty input): `{ error: string, result: null }`
## Troubleshooting
### Empty Text Fields
If a text field specified in the `textFields` configuration is empty or null, the plugin will not throw an error. Instead, it will add a warning to the record using `record.addWarning()` with a message like "No text found for sentiment analysis in field: \[field\_name]".
### Analysis Not Running
If `automaticValidation` is set to `false` or is omitted, the plugin will not perform any analysis or add any warnings. It will only add an informational message to the record stating that automatic analysis is disabled.
## Notes
### Default Behavior
* The plugin is **disabled by default**. You must explicitly set `automaticValidation: true` to enable sentiment analysis.
* If no `textFields` are specified, the plugin will default to analyzing the `description` field.
### Generated Fields
The plugin adds two new fields to each processed record for each analyzed text field:
* `_sentiment_score`: Contains the numerical sentiment score
* `_sentiment_category`: Contains the sentiment category ('positive', 'negative', or 'neutral')
Ensure your Sheet configuration or downstream systems can handle these new fields.
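One way to handle them is to declare the generated fields in your Sheet configuration up front so the columns are visible in the workbook. The field keys below match the names documented above; the sheet name, slug, labels, and types are illustrative:

```javascript
// Sketch of a sheet config that declares the plugin's generated
// sentiment fields alongside the analyzed text field.
const feedbackSheet = {
  name: 'Customer Feedback',
  slug: 'customer-feedback',
  fields: [
    { key: 'comment', type: 'string', label: 'Comment' },
    { key: '_sentiment_score', type: 'number', label: 'Sentiment Score' },
    { key: '_sentiment_category', type: 'string', label: 'Sentiment Category' },
  ],
};
```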
### Configuration Considerations
* The configuration property to enable the plugin is `automaticValidation` (boolean), not `autoAnalysis`.
* The `errorMessages` property in the `EnrichSentimentConfig` type is defined but not currently implemented in the plugin's logic.
* The plugin relies on the `sentiment` npm package for its analysis logic.
# Flatfile Space Configuration Plugin
Source: https://flatfile.com/docs/plugins/space-configure
Programmatically set up and configure a new Flatfile Space with workbooks, sheets, documents, and metadata from a single configuration object.
The `@flatfile/plugin-space-configure` plugin is designed to programmatically set up and configure a new Flatfile Space. It operates within a server-side listener, typically responding to the 'space:configure' event which is triggered when a new Space is created.
Its primary purpose is to define the entire structure of a Space from a single configuration object. This includes creating one or more Workbooks, defining their Sheets with specific fields and actions, setting Space-level properties like metadata and themes, and adding initial documents such as a welcome guide.
The plugin also includes a secondary utility, `dataChecklistPlugin`, which can automatically generate and maintain a "Data Checklist" document within the Space. This document provides a summary of all the fields and data types defined in the Space's workbooks, serving as a handy reference for users.
This plugin is essential for developers who want to create templatized, repeatable, or dynamically generated Space configurations for their users.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-space-configure
```
## Configuration & Parameters
The `configureSpace` function takes a `setup` object as its primary configuration. This can be a static object or a function that returns an object.
The `setup` object has the following properties:
### workbooks
* **Type:** `Partial<Flatfile.CreateWorkbookConfig>[]`
* **Required:** Yes
* **Description:** An array of workbook configuration objects. Each object defines a workbook to be created in the Space. You can specify its name, sheets, actions, labels, etc. This is the core of the Space's data structure.
* **Default:** There is no default; you must provide at least an empty array `[]`.
### space
* **Type:** `Partial<Flatfile.SpaceConfig>`
* **Required:** No
* **Description:** An object to configure the Space itself. You can set metadata (like themes), the primary workbook ID (though the plugin handles this automatically), and other space-level settings.
* **Default:** The plugin will automatically set the `primaryWorkbookId` to the ID of the first workbook created. Other properties are unset by default.
### documents
* **Type:** `Flatfile.DocumentConfig[]`
* **Required:** No
* **Description:** An array of document configuration objects. Each object creates a document in the Space's sidebar. This is useful for providing welcome text, instructions, or guides.
* **Default:** No documents are created by default.
### config
* **Type:** `object`
* **Required:** No
* **Description:** An object for plugin-specific configurations.
* `maintainWorkbookOrder` (boolean): If set to `true`, the plugin will configure the Space's sidebar to display the workbooks in the same order they are defined in the `workbooks` array.
* **Default:** `{ maintainWorkbookOrder: false }`
## Usage Examples
### Basic Usage
```javascript JavaScript
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener) {
listener.use(
configureSpace({
workbooks: [],
space: {
metadata: {
name: 'My Empty Space',
},
},
})
)
}
```
```typescript TypeScript
import type { FlatfileListener } from '@flatfile/listener'
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener: FlatfileListener) {
listener.use(
configureSpace({
workbooks: [],
space: {
metadata: {
name: 'My Empty Space',
},
},
})
)
}
```
### Configuration with Workbook and Sheets
```javascript JavaScript
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener) {
listener.use(
configureSpace({
workbooks: [
{
name: 'My First Workbook',
sheets: [
{
name: 'Contacts',
slug: 'contacts',
fields: [
{ key: 'firstName', type: 'string', label: 'First Name' },
{ key: 'lastName', type: 'string', label: 'Last Name' },
{ key: 'email', type: 'string', label: 'Email' },
],
},
],
},
],
space: {
metadata: {
theme: {
root: { primaryColor: 'blue' },
},
},
},
})
)
}
```
```typescript TypeScript
import type { FlatfileListener } from '@flatfile/listener'
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener: FlatfileListener) {
listener.use(
configureSpace({
workbooks: [
{
name: 'My First Workbook',
sheets: [
{
name: 'Contacts',
slug: 'contacts',
fields: [
{ key: 'firstName', type: 'string', label: 'First Name' },
{ key: 'lastName', type: 'string', label: 'Last Name' },
{ key: 'email', type: 'string', label: 'Email' },
],
},
],
},
],
space: {
metadata: {
theme: {
root: { primaryColor: 'blue' },
},
},
},
})
)
}
```
### Advanced Usage with Callback
```javascript JavaScript
import { configureSpace } from '@flatfile/plugin-space-configure'
import api from '@flatfile/api'
export default function (listener) {
listener.use(
configureSpace(
{
workbooks: [
{
name: 'Onboarding Workbook',
sheets: [{ name: 'Contacts', fields: [{ key: 'email', type: 'string' }] }],
},
],
documents: [
{
title: 'Welcome Guide',
body: 'Welcome! Follow the steps to get started.',
},
],
},
async (event, workbookIds, tick) => {
// This code runs after the Space and Workbooks are created.
const { spaceId } = event.context
const workbookId = workbookIds[0]
await tick(60, 'Callback started')
console.log(`Space ${spaceId} and Workbook ${workbookId} are ready.`)
// You can now perform additional API calls, like adding records.
await api.records.insert(workbookId, [
{ email: { value: 'john.doe@example.com' } },
])
await tick(100, 'Callback complete')
}
)
)
}
```
```typescript TypeScript
import type { FlatfileListener } from '@flatfile/listener'
import { configureSpace } from '@flatfile/plugin-space-configure'
import api from '@flatfile/api'
export default function (listener: FlatfileListener) {
listener.use(
configureSpace(
{
workbooks: [
{
name: 'Onboarding Workbook',
sheets: [{ name: 'Contacts', fields: [{ key: 'email', type: 'string' }] }],
},
],
documents: [
{
title: 'Welcome Guide',
body: 'Welcome! Follow the steps to get started.',
},
],
},
async (event, workbookIds, tick) => {
// This code runs after the Space and Workbooks are created.
const { spaceId } = event.context
const workbookId = workbookIds[0]
await tick(60, 'Callback started')
console.log(`Space ${spaceId} and Workbook ${workbookId} are ready.`)
// You can now perform additional API calls, like adding records.
await api.records.insert(workbookId, [
{ email: { value: 'john.doe@example.com' } },
])
await tick(100, 'Callback complete')
}
)
)
}
```
### Using Data Checklist Plugin
```javascript JavaScript
import { configureSpace, dataChecklistPlugin } from '@flatfile/plugin-space-configure'
export default function (listener) {
// Use configureSpace to set up the initial structure
listener.use(
configureSpace({
workbooks: [{ name: 'Contacts', sheets: [/* ... */] }],
})
)
// Use dataChecklistPlugin to generate a summary document
// This will run after the workbook is created by configureSpace
listener.use(dataChecklistPlugin())
}
```
```typescript TypeScript
import type { FlatfileListener } from '@flatfile/listener'
import { configureSpace, dataChecklistPlugin } from '@flatfile/plugin-space-configure'
export default function (listener: FlatfileListener) {
// Use configureSpace to set up the initial structure
listener.use(
configureSpace({
workbooks: [{ name: 'Contacts', sheets: [/* ... */] }],
})
)
// Use dataChecklistPlugin to generate a summary document
// This will run after the workbook is created by configureSpace
listener.use(dataChecklistPlugin())
}
```
### Error Handling Example
```javascript JavaScript
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener) {
listener.use(
configureSpace(
{ workbooks: [{ name: 'My Workbook' }] },
async (event, workbookIds, tick) => {
try {
await tick(75, 'Running custom logic');
// Simulate a failing operation
throw new Error('Custom API call failed!');
} catch (e) {
console.error('Error in callback:', e.message);
// Rethrow the error to fail the job
throw e;
}
}
)
)
}
```
```typescript TypeScript
import type { FlatfileListener } from '@flatfile/listener'
import { configureSpace } from '@flatfile/plugin-space-configure'
export default function (listener: FlatfileListener) {
listener.use(
configureSpace(
{ workbooks: [{ name: 'My Workbook' }] },
async (event, workbookIds, tick) => {
try {
await tick(75, 'Running custom logic');
// Simulate a failing operation
throw new Error('Custom API call failed!');
} catch (e) {
console.error('Error in callback:', e.message);
// Rethrow the error to fail the job
throw e;
}
}
)
)
}
```
## API Reference
### configureSpace(setupFactory, callback)
Creates a Flatfile listener plugin that listens for the `job:ready` event with the topic `space:configure`. When triggered, it configures the Space according to the provided setup. It handles creating workbooks, updating the space with a primary workbook, and creating documents.
**Parameters:**
1. `setupFactory`: `Setup | (event: FlatfileEvent) => Setup | Promise<Setup>`
* The configuration for the space. This can be a static object or an async function that receives the event context and returns a configuration object.
2. `callback`: `(event: FlatfileEvent, workbookIds: string[], tick: TickFunction) => any | Promise<any>` (Optional)
* An optional async function that is executed after the space and workbooks have been successfully configured. The job progress will be at 50% when the callback is invoked.
* `event`: The original FlatfileEvent that triggered the job.
* `workbookIds`: An array of strings containing the IDs of the workbooks that were created.
* `tick`: A function to update the job's progress percentage and message.
**Returns:** `(listener: FlatfileListener) => void`
### dataChecklistPlugin()
A utility plugin that creates and maintains a "Data Checklist" document in a Space. It listens for `workbook:created` and `workbook:updated` events, then inspects all workbooks and sheets to generate an HTML document summarizing the data model.
**Parameters:** None
**Returns:** `(listener: FlatfileListener) => void`
## Troubleshooting
If a Space fails to configure, check the "Jobs" log in the Flatfile Dashboard for the specific Space. The `space:configure` job will show an error message detailing what went wrong.
Common issues include:
* Malformed configuration objects (e.g., incorrect field types in a sheet definition)
* API permission errors - ensure your agent has the necessary permissions to create workbooks, documents, and update spaces
## Notes
### Special Considerations
* This plugin is designed to be used in a server-side listener environment
* The `configureSpace` plugin is specifically tied to the `space:configure` job topic. It will not run on other events
* The `dataChecklistPlugin` listens for `workbook:created` and `workbook:updated` events. It will automatically update its document if you add or change workbooks in the Space after the initial configuration
### Error Handling Patterns
* The plugin is built on top of `@flatfile/plugin-job-handler`, which provides robust job management. If any of the API calls made by the plugin fail, the job handler will catch the error, mark the job as 'failed', and provide the error message in the Flatfile UI
* For custom logic inside the optional `callback` function, you are responsible for your own error handling. It is best practice to use `try/catch` blocks. If you rethrow an error from the callback, the job will be marked as 'failed'
### Default Behavior
* The plugin will automatically set the `primaryWorkbookId` to the ID of the first workbook created
* Workbooks are displayed in the sidebar in the order they are created unless `maintainWorkbookOrder` is set to `true`
* No documents are created by default unless specified in the configuration
# Space Configure from Template
Source: https://flatfile.com/docs/plugins/space-configure-from-template
Automatically configure new Flatfile Spaces by cloning an existing Space Template, including workbooks, documents, and settings.
This plugin automates the setup of a new Flatfile Space by cloning an existing Space Template. It listens for the `space:configure` event, which is triggered when a new Space is created. Upon activation, the plugin finds the oldest Space Template associated with the current Flatfile App, and then copies its workbooks, documents, and settings (including metadata, actions, labels, etc.) into the new Space.
This is particularly useful for scenarios where you need to consistently provision new Spaces with a predefined structure and configuration, ensuring a uniform starting point for all users. It should be deployed in a server-side listener.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-space-configure-from-template
```
## Configuration & Parameters
The plugin is configured through a single optional parameter passed to the `configureSpaceFromTemplate` function.
### Parameters
| Parameter | Type | Description | Default |
| ---------- | --------------------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- | ---------------------------- |
| `callback` | `Function` (optional) | An asynchronous function executed after the Space and its Workbooks have been successfully created from the template. Receives the original `event`, an array of the new `workbookIds`, and a `tick` function to report progress. | No additional logic executed |
#### Callback Function Signature
```typescript
(event: FlatfileEvent, workbookIds: string[], tick: TickFunction) => any | Promise<any>
```
* `event`: The FlatfileEvent object that triggered the job
* `workbookIds`: An array of strings, where each string is the ID of a workbook created in the new Space
* `tick`: A function to update the job's progress with signature `(progress: number, message?: string) => Promise<any>`
## Usage Examples
### Basic Usage
```javascript JavaScript
import { configureSpaceFromTemplate } from "@flatfile/plugin-space-configure-from-template";
export default function (listener) {
listener.use(configureSpaceFromTemplate());
}
```
```typescript TypeScript
import { configureSpaceFromTemplate } from "@flatfile/plugin-space-configure-from-template";
import type { FlatfileListener } from "@flatfile/listener";
export default function (listener: FlatfileListener) {
listener.use(configureSpaceFromTemplate());
}
```
### Usage with Callback
```javascript JavaScript
import { configureSpaceFromTemplate } from "@flatfile/plugin-space-configure-from-template";
export default function (listener) {
listener.use(
configureSpaceFromTemplate(
async (event, workbookIds, tick) => {
const { spaceId } = event.context;
// Report progress
await tick(60, "Callback started.");
// Example: Log the IDs of the newly created workbooks
console.log(`Space ${spaceId} configured with workbooks: ${workbookIds.join(', ')}`);
// Perform other custom actions here...
await tick(99, "Callback complete!");
}
)
);
}
```
```typescript TypeScript
import type { FlatfileListener } from "@flatfile/listener";
import { configureSpaceFromTemplate } from "@flatfile/plugin-space-configure-from-template";
export default function (listener: FlatfileListener) {
listener.use(
configureSpaceFromTemplate(
async (event, workbookIds, tick) => {
const { spaceId } = event.context;
// Report progress
await tick(60, "Callback started.");
// Example: Log the IDs of the newly created workbooks
console.log(`Space ${spaceId} configured with workbooks: ${workbookIds.join(', ')}`);
// Perform other custom actions here...
await tick(99, "Callback complete!");
}
)
);
}
```
## Troubleshooting
### Common Errors
**"Space configuration failed"**
* Check the logs of your listener for a more detailed error message
* This could be due to missing API permissions, network issues, or the absence of a Space Template in your App
**"No space template found"**
* This error occurs if the plugin cannot find any Space Templates associated with the `appId` from the event context
* Ensure you have at least one Space configured as a template for the App you are using
## Notes
### Default Behavior
If no callback is provided, the plugin will configure the Space from the template and mark the job as complete. No additional custom logic is executed. When a callback is provided, it is invoked after the space configuration is complete, with the job's progress at 50%.
### Important Considerations
* **Deployment Environment**: This plugin must be deployed in a server-side listener environment, as it makes direct calls to the Flatfile API
* **Template Selection Logic**: The plugin automatically selects the oldest Space Template associated with the App (sorted by `createdAt` date). It is not possible to specify which template to use if multiple exist. The first one created will always be chosen
* **Permissions**: The agent or token used by the listener must have sufficient permissions to perform the following API actions: `spaces:list`, `spaces:update`, `workbooks:list`, `workbooks:create`, `documents:list`, and `documents:create`
* **Idempotency**: The plugin operates on the `space:configure` event, which typically runs only once for a given Space. Re-running the job may lead to duplicate workbooks or unexpected behavior
### Error Handling
Errors are caught within the `jobHandler`. A generic error is thrown to fail the job, and the specific error is logged to the server console for debugging. There is no built-in mechanism for custom error handling; failures are managed by the Flatfile Job system.
# SQL DDL to Flatfile Blueprint Converter
Source: https://flatfile.com/docs/plugins/sql-ddl-converter
Automatically create Flatfile Blueprints from SQL Data Definition Language (DDL) files to streamline database schema imports.
The SQL DDL Converter plugin automates the creation of a Flatfile Blueprint from a SQL Data Definition Language (DDL) file. Its primary purpose is to streamline the setup of a Flatfile Space by translating existing database schemas directly into Flatfile Workbooks and Sheets.
The plugin reads a provided SQL file (e.g., from a `CREATE TABLE` script), parses it to identify table structures, and converts each table into a corresponding Sheet configuration with appropriate fields. This is ideal for use cases where a data import process needs to match an existing database schema, saving significant manual configuration time. It is designed to be used in a server-side listener, typically on the `space:configure` event.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-convert-sql-ddl
```
## Configuration & Parameters
The plugin is configured through the `setupFactory` object passed to the `configureSpaceWithSqlDDL` function.
### setupFactory: SqlSetupFactory (required)
The main configuration object containing:
#### workbooks: PartialWorkbookConfig\[] (required)
An array of workbook configurations to create in the Space.
Each `PartialWorkbookConfig` object contains:
* **name** (string): The display name of the Workbook
* **source** (string, required): The relative file path to the SQL DDL file (relative to project root)
* **sheets** (PartialSheetConfig\[], required): An array of sheet configurations
Each `PartialSheetConfig` object contains:
* **name** (string): The display name of the Sheet
* **slug** (string, required): The identifier for the sheet that MUST exactly match the table name from the SQL DDL file
#### space (object, optional)
An object containing additional configuration for the Space itself, such as metadata. Follows the structure of `Flatfile.spaces.SpaceConfig`.
### Default Behavior
The plugin has no default configuration and requires the `setupFactory` object to be fully defined. It will:
* Read the file specified in the `source` property
* Parse it as MySQL DDL
* Map each table to a sheet configuration where the table name matches the sheet's `slug`
* Log errors to console and skip sheets if the `slug` doesn't correspond to any table found in the SQL file
* Continue processing remaining valid sheets
## Usage Examples
### Basic Usage
```javascript JavaScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL } from "@flatfile/plugin-convert-sql-ddl";
listener.use(
configureSpaceWithSqlDDL({
workbooks: [
{
name: "Database Import",
source: "src/data/schema.sql",
sheets: [
{
name: "Users",
slug: "users", // Must match a table name in schema.sql
},
{
name: "Products",
slug: "products", // Must match a table name in schema.sql
},
],
},
],
})
);
```
```typescript TypeScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL, SqlSetupFactory } from "@flatfile/plugin-convert-sql-ddl";
const setupFactory: SqlSetupFactory = {
workbooks: [
{
name: "Database Import",
source: "src/data/schema.sql",
sheets: [
{
name: "Users",
slug: "users", // Must match a table name in schema.sql
},
{
name: "Products",
slug: "products", // Must match a table name in schema.sql
},
],
},
],
};
listener.use(configureSpaceWithSqlDDL(setupFactory));
```
### Advanced Configuration with Callback
```javascript JavaScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL } from "@flatfile/plugin-convert-sql-ddl";
listener.use(
configureSpaceWithSqlDDL(
{
workbooks: [
{
name: "Database Import",
source: "src/data/schema.sql",
sheets: [
{
name: "Users",
slug: "users",
},
],
},
],
space: {
metadata: {
sidebarTitle: "SQL Import Tool",
},
},
},
(event, workbookIds, tick) => {
// This function runs after the space is configured
console.log("Space configured successfully!");
console.log("Created workbook IDs:", workbookIds);
tick(100, "Configuration complete");
}
)
);
```
```typescript TypeScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL, SqlSetupFactory } from "@flatfile/plugin-convert-sql-ddl";
import { FlatfileEvent } from "@flatfile/listener";
const setupFactory: SqlSetupFactory = {
workbooks: [
{
name: "Database Import",
source: "src/data/schema.sql",
sheets: [
{
name: "Users",
slug: "users",
},
],
},
],
space: {
metadata: {
sidebarTitle: "SQL Import Tool",
},
},
};
listener.use(
configureSpaceWithSqlDDL(
setupFactory,
(event: FlatfileEvent, workbookIds: string[], tick: any) => {
// This function runs after the space is configured
console.log("Space configured successfully!");
console.log("Created workbook IDs:", workbookIds);
tick(100, "Configuration complete");
}
)
);
```
### Custom Event Handler Usage
```javascript JavaScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL } from "@flatfile/plugin-convert-sql-ddl";
listener.on("space:configure", async (event) => {
const { spaceId } = event.context;
const configure = configureSpaceWithSqlDDL({
workbooks: [
{
name: "Customer Data",
source: "data/my_schema.sql",
sheets: [
{ name: "Customers", slug: "customers" },
{ name: "Orders", slug: "orders" },
],
},
],
space: {
metadata: {
theme: {
root: {
primaryColor: "blue"
}
}
}
}
});
await configure(listener);
});
```
```typescript TypeScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL, SqlSetupFactory } from "@flatfile/plugin-convert-sql-ddl";
import { FlatfileEvent } from "@flatfile/listener";
listener.on("space:configure", async (event: FlatfileEvent) => {
const { spaceId } = event.context;
const setupFactory: SqlSetupFactory = {
workbooks: [
{
name: "Customer Data",
source: "data/my_schema.sql",
sheets: [
{ name: "Customers", slug: "customers" },
{ name: "Orders", slug: "orders" },
],
},
],
space: {
metadata: {
theme: {
root: {
primaryColor: "blue"
}
}
}
}
};
const configure = configureSpaceWithSqlDDL(setupFactory);
await configure(listener);
});
```
## Troubleshooting
### Schema Not Found Error
If a sheet's `slug` doesn't match any table in the SQL file, you'll see a console error:
```
Schema not found for table name accounts
```
**Solution**: Ensure the `slug` property exactly matches the table name in your SQL DDL file.
### File Not Found Error
If the SQL file specified in the `source` property cannot be read, a hard error will be thrown.
**Solution**:
* Verify the file path is correct and relative to your project root
* Ensure the file exists and the process has read permissions
* Check that the file contains valid MySQL DDL syntax
### Example Error Handling
```javascript JavaScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL } from "@flatfile/plugin-convert-sql-ddl";
// Example: schema.sql contains 'users' table but not 'accounts'
listener.use(
configureSpaceWithSqlDDL({
workbooks: [
{
name: "App Data",
source: "data/schema.sql",
sheets: [
{
name: "Users",
slug: "users", // This will succeed
},
{
name: "Accounts",
slug: "accounts", // This will fail and be skipped
},
],
},
],
})
);
```
```typescript TypeScript
import { listener } from "@flatfile/listener";
import { configureSpaceWithSqlDDL, SqlSetupFactory } from "@flatfile/plugin-convert-sql-ddl";
// Example: schema.sql contains 'users' table but not 'accounts'
const setupFactory: SqlSetupFactory = {
workbooks: [
{
name: "App Data",
source: "data/schema.sql",
sheets: [
{
name: "Users",
slug: "users", // This will succeed
},
{
name: "Accounts",
slug: "accounts", // This will fail and be skipped
},
],
},
],
};
listener.use(configureSpaceWithSqlDDL(setupFactory));
```
## Notes
### Important Considerations
* **Server-Side Only**: This plugin must be deployed in a server-side listener environment as it requires file system access to read SQL files
* **File Path**: The `source` property must be a file path relative to the project's root directory
* **SQL Dialect**: Currently only supports MySQL-compatible DDL syntax due to the hardcoded parser configuration
* **Exact Matching**: The `slug` property must exactly match table names in your SQL DDL file for proper field generation
* **Automatic Constraints**: The plugin automatically identifies `NOT NULL` columns and other constraints from DDL and translates them into Flatfile field constraints
* **Bundled Dependencies**: Uses `sql-ddl-to-json-schema` and `@flatfile/plugin-convert-json-schema` internally (no separate installation required)
### Default Behavior
* Reads and parses SQL files as MySQL DDL
* Maps tables to sheets based on exact slug matching
* Logs errors for unmatched slugs but continues processing valid sheets
* Automatically applies field constraints based on SQL column definitions
* Requires fully defined configuration with no built-in defaults
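To make the constraint translation concrete, here is an illustrative sketch of how a `NOT NULL` column could surface as a required Flatfile field. The exact output is produced by the bundled `sql-ddl-to-json-schema` pipeline, so treat these field objects as an approximation rather than the plugin's literal output:

```javascript
// DDL being translated (for reference):
//   CREATE TABLE users (
//     email    VARCHAR(255) NOT NULL,
//     nickname VARCHAR(50)
//   );
// Approximate resulting sheet fields: NOT NULL becomes a 'required' constraint.
const usersSheetFields = [
  { key: 'email', type: 'string', constraints: [{ type: 'required' }] },
  { key: 'nickname', type: 'string' },
]
```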
# Stored Constraints Plugin
Source: https://flatfile.com/docs/plugins/stored-constraints
Automatically applies server-side validation rules to records during the data import process using reusable JavaScript functions defined at the App level.
The Stored Constraints plugin automatically applies server-side validation rules to records during the data import process. These rules, called "stored constraints," are defined as reusable JavaScript functions at the App level within your Flatfile account. You can then apply these constraints to specific fields in your Sheet configurations.
When a user submits data, this plugin triggers. It fetches the stored constraint functions from the Flatfile API, finds the corresponding fields in the submitted data, and executes the validation logic for each record.
This plugin is ideal for enforcing complex or reusable business logic, such as validating a SKU against an external database, checking for valid country-state combinations, or implementing custom data quality rules that are shared across multiple Sheets or Blueprints.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-stored-constraints
```
## Configuration & Parameters
The `storedConstraint()` function itself does not accept any configuration parameters. The plugin's behavior is configured entirely within the Flatfile platform.
Configuration is managed in two places in Flatfile:
### App Constraints
* **Location**: In your Flatfile App's settings
* **Purpose**: This is where you define the reusable validation logic. Each stored constraint consists of a unique name (the "validator") and a JavaScript function body (the "function")
* **Example**: You could create a constraint named "is-valid-email" with the function body `(value, key, { record }) => { if (!/\S+@\S+\.\S+/.test(value)) { record.addError(key, 'Invalid email format.') } }`
### Sheet Field Constraints
* **Location**: In your Sheet or Blueprint configuration
* **Purpose**: You apply a stored constraint to a specific field by adding a constraint of type "stored" and referencing its name
* **`validator`** (string): The name of the App-level constraint to apply (e.g., "is-valid-email")
* **`config`** (object, optional): An arbitrary configuration object that gets passed to your validation function. This allows you to make a single stored constraint adaptable to different contexts
### Default Behavior
By default, the plugin does nothing if no fields in a Sheet are configured with a `type: 'stored'` constraint. When such constraints are present, the plugin will automatically trigger on every data submission (`commit:created` event) for all sheets and execute the corresponding validation logic.
## Usage Examples
### Basic Usage
```javascript JavaScript
// listener.js
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener) {
// Registers the plugin to run on data submission
listener.use(storedConstraint());
}
```
```typescript TypeScript
// listener.ts
import type { FlatfileListener } from "@flatfile/listener";
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener: FlatfileListener) {
// Registers the plugin to run on data submission
listener.use(storedConstraint());
}
```
### Configuration Example
```javascript JavaScript
// In your Flatfile Sheet/Blueprint configuration (JSON or UI):
/*
{
"key": "email",
"label": "Email",
"constraints": [
{
"type": "stored",
"validator": "is-valid-email-domain",
"config": {
"allowedDomain": "example.com"
}
}
]
}
*/
// In your Flatfile App's Stored Constraints section (UI):
/*
Validator Name: "is-valid-email-domain"
Function:
function(value, key, { record, config }) {
if (!value.endsWith('@' + config.allowedDomain)) {
record.addError(key, 'Email must be from the ' + config.allowedDomain + ' domain.');
}
}
*/
// Your listener code remains simple:
// listener.js
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener) {
listener.use(storedConstraint());
}
```
```typescript TypeScript
// In your Flatfile Sheet/Blueprint configuration (JSON or UI):
/*
{
"key": "email",
"label": "Email",
"constraints": [
{
"type": "stored",
"validator": "is-valid-email-domain",
"config": {
"allowedDomain": "example.com"
}
}
]
}
*/
// In your Flatfile App's Stored Constraints section (UI):
/*
Validator Name: "is-valid-email-domain"
Function:
function(value, key, { record, config }) {
if (!value.endsWith('@' + config.allowedDomain)) {
record.addError(key, 'Email must be from the ' + config.allowedDomain + ' domain.');
}
}
*/
// Your listener code remains simple:
// listener.ts
import type { FlatfileListener } from "@flatfile/listener";
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener: FlatfileListener) {
listener.use(storedConstraint());
}
```
### Advanced Usage with Dependencies
Stored constraint functions have access to a `deps` object containing helpful libraries. This example shows a constraint using the `validator` library to check for a valid email format.
```javascript JavaScript
// In your Flatfile App's Stored Constraints section (UI):
/*
Validator Name: "is-valid-email-advanced"
Function:
function(value, key, { record, deps }) {
// Access the 'validator' library from the deps object
if (!deps.validator.isEmail(value)) {
record.addError(key, 'Please enter a valid email address.');
}
}
*/
// Your listener code does not change:
// listener.js
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener) {
listener.use(storedConstraint());
}
```
```typescript TypeScript
// In your Flatfile App's Stored Constraints section (UI):
/*
Validator Name: "is-valid-email-advanced"
Function:
function(value, key, { record, deps }) {
// Access the 'validator' library from the deps object
if (!deps.validator.isEmail(value)) {
record.addError(key, 'Please enter a valid email address.');
}
}
*/
// Your listener code does not change:
// listener.ts
import type { FlatfileListener } from "@flatfile/listener";
import { storedConstraint } from '@flatfile/plugin-stored-constraints';
export default function (listener: FlatfileListener) {
listener.use(storedConstraint());
}
```
### Error Handling Examples
#### User-facing Error
```javascript
// Stored constraint function defined in the Flatfile App UI
function(value, key, { record }) {
if (value === null || value === '') {
// This adds an error to the cell in the UI
record.addError(key, 'This field is required.');
}
}
```
#### Server-side Error
```javascript
// Stored constraint function defined in the Flatfile App UI
function(value, key, { record }) {
// This will throw an exception because 'badProperty' does not exist
const x = null;
x.badProperty = 'test';
}
// When the above constraint runs, the plugin will catch the TypeError
// and log a message like "Error executing constraint: Cannot set properties of null..."
// to the server console. No error will appear in the Flatfile UI.
```
## Troubleshooting
If your constraints are not running, verify that:
1. The `storedConstraint()` plugin is registered in your listener
2. The field in your Sheet configuration has a constraint with `type: 'stored'`
3. The `validator` name in the Sheet constraint exactly matches the name of a Stored Constraint defined in your App
4. The listener's environment has a valid Flatfile API key with sufficient permissions
If you see "Error executing constraint" in your server logs, check the corresponding stored constraint function for runtime errors.
## Notes
### Security Considerations
* The plugin uses `eval()` to execute the function strings stored in your App constraints. This means the constraint logic is executed dynamically. Ensure that only trusted administrators can create or edit stored constraint functions to avoid security risks.
### Requirements
* The plugin requires an active connection to the Flatfile API to fetch sheet configurations and app-level constraint definitions. Ensure your listener environment is configured with a valid Flatfile API key.
* The plugin triggers on the `commit:created` event, which runs server-side after a user submits their data.
### Error Handling
* The plugin includes a `try...catch` block around the execution of each constraint. If a constraint function throws an unhandled exception, the error is logged to the server-side console, and processing continues with the next record or field.
* For user-facing validation errors, the constraint function logic itself is responsible for calling `record.addError(fieldKey, 'Your error message')`. The plugin does not automatically convert thrown exceptions into record errors.
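Because thrown exceptions are only logged server-side, a defensive constraint can catch its own failures and surface them as cell errors. A minimal sketch of that pattern — the `record` here is a stand-in for the object Flatfile passes to the constraint at run time:

```javascript
// Stored-constraint body with its own try/catch: unexpected runtime
// failures become user-visible cell errors instead of silent log lines.
function safeRequiredConstraint(value, key, { record }) {
  try {
    if (value === null || value === undefined || String(value).trim() === '') {
      record.addError(key, 'This field is required.')
    }
  } catch (e) {
    // Convert any unexpected failure into a visible validation error
    record.addError(key, 'Validation failed: ' + e.message)
  }
}
```

In the Flatfile UI this would be pasted in as the constraint's function body; the plugin supplies `value`, `key`, and the context object when the constraint runs.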
# String Validator Plugin
Source: https://flatfile.com/docs/plugins/string
A comprehensive plugin for validating and transforming string data during the Flatfile import process with configurable validation rules, pattern matching, and automatic data corrections.
The String Validator plugin for Flatfile provides a comprehensive way to validate and transform string data during the import process. It allows you to configure multiple validation rules for specified fields in a single, easy-to-use configuration object.
Its main purpose is to enforce data quality and consistency for string-based fields. Key features include validating strings against regular expression patterns (both common predefined patterns like email and URL, and custom ones), enforcing length constraints (min, max, or exact), and ensuring proper casing (lowercase, uppercase, or titlecase). The plugin can also automatically trim leading or trailing whitespace.
Use cases include cleaning user-submitted data, ensuring identifiers match a specific format (e.g., 'ABC-123'), validating contact information like emails and phone numbers, and standardizing the case of names or categories before they are imported into a system. If a value doesn't meet a validation rule but can be corrected (e.g., wrong case, extra whitespace), the plugin will automatically transform the value and notify the user of the change.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-validate-string
```
## Configuration & Parameters
The plugin is configured with a single `StringValidationConfig` object with the following properties:
### Required Parameters
* `fields` (string[]): An array of field keys (column names) to which the validation rules should be applied.
### Optional Parameters
* `sheetSlug` (string): The slug of a specific sheet to apply the validations to. Defaults to `'**'` (applies to all sheets in the workbook).
* `pattern` (string | RegExp): A regular expression to validate the string against. Use one of the predefined string keys for common patterns (`'email'`, `'phone'`, `'url'`) or provide a custom `RegExp` object.
* `minLength` (number): The minimum allowed length for the string.
* `maxLength` (number): The maximum allowed length for the string.
* `exactLength` (number): The exact required length for the string.
* `caseType` (string): Enforces a specific character case (`'lowercase'`, `'uppercase'`, or `'titlecase'`). If the input string does not match, it is transformed and a validation message is added.
* A trim configuration (object with `leading` and `trailing` booleans): If `leading` is true, whitespace is trimmed from the start; if `trailing` is true, from the end. When a string is trimmed, the value is transformed and a message is added.
* `emptyStringAllowed` (boolean): Controls whether an empty string is considered a valid value. By default, empty strings generate a "Field cannot be empty" error.
* `errorMessages` (object): Custom error messages for different validation types, overriding the default messages (e.g., "Invalid format", "Minimum length is X").
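The predefined pattern keys resolve to built-in regular expressions, while a `RegExp` value is used as-is. A minimal sketch of that resolution follows; the regexes shown are illustrative examples, not the plugin's actual patterns.

```javascript
// Illustrative sketch of resolving a pattern option to a RegExp.
// These regexes are examples only, not the plugin's actual patterns.
const PREDEFINED_PATTERNS = {
  email: /^[^\s@]+@[^\s@]+\.[^\s@]+$/,
  url: /^https?:\/\/\S+$/,
  phone: /^\+?[\d\s().-]{7,}$/,
};

function resolvePattern(pattern) {
  // A string key looks up a predefined RegExp; a RegExp object is used as-is
  return typeof pattern === 'string' ? PREDEFINED_PATTERNS[pattern] : pattern;
}
```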
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
listener.use(validateString({
fields: ['firstName', 'lastName'],
minLength: 2,
maxLength: 50,
caseType: 'titlecase'
}));
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
listener.use(validateString({
fields: ['firstName', 'lastName'],
minLength: 2,
maxLength: 50,
caseType: 'titlecase'
}));
```
### Email Validation
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
listener.use(validateString({
fields: ['email'],
pattern: 'email',
emptyStringAllowed: false,
errorMessages: {
pattern: 'Please provide a valid email address.',
length: 'The email address is too long.'
}
}));
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
listener.use(validateString({
fields: ['email'],
pattern: 'email',
emptyStringAllowed: false,
errorMessages: {
pattern: 'Please provide a valid email address.',
length: 'The email address is too long.'
}
}));
```
### Advanced Custom Pattern Validation
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
// Example: Validate a product SKU that must be in the format 'SKU-12345'
listener.use(validateString({
fields: ['product_sku'],
pattern: /^SKU-\d{5}$/,
caseType: 'uppercase',
errorMessages: {
pattern: 'SKU must be in the format SKU-XXXXX, where X is a digit.',
case: 'SKU must be in uppercase.'
}
}));
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { validateString } from '@flatfile/plugin-validate-string';
const listener = new FlatfileListener();
// Example: Validate a product SKU that must be in the format 'SKU-12345'
listener.use(validateString({
fields: ['product_sku'],
pattern: /^SKU-\d{5}$/,
caseType: 'uppercase',
errorMessages: {
pattern: 'SKU must be in the format SKU-XXXXX, where X is a digit.',
case: 'SKU must be in uppercase.'
}
}));
```
### Using the Utility Function
```javascript JavaScript
import { validateAndTransformString } from '@flatfile/plugin-validate-string';
const config = {
fields: ['email'],
pattern: 'email'
};
const result1 = validateAndTransformString('test@example.com', config);
// result1 -> { value: 'test@example.com', error: null }
const result2 = validateAndTransformString('not-an-email', config);
// result2 -> { value: 'not-an-email', error: 'Invalid format' }
```
```typescript TypeScript
import { validateAndTransformString } from '@flatfile/plugin-validate-string';
const config = {
fields: ['email'],
pattern: 'email'
};
const result1 = validateAndTransformString('test@example.com', config);
// result1 -> { value: 'test@example.com', error: null }
const result2 = validateAndTransformString('not-an-email', config);
// result2 -> { value: 'not-an-email', error: 'Invalid format' }
```
### Error Handling Example
```javascript JavaScript
import { validateAndTransformString } from '@flatfile/plugin-validate-string';
const config = {
fields: ['code'],
exactLength: 5
};
const result = validateAndTransformString('ABC', config);
if (result.error) {
console.log(`Validation failed: ${result.error}`);
// Logs: "Validation failed: Exact length must be 5"
}
```
```typescript TypeScript
import { validateAndTransformString } from '@flatfile/plugin-validate-string';
const config = {
fields: ['code'],
exactLength: 5
};
const result = validateAndTransformString('ABC', config);
if (result.error) {
console.log(`Validation failed: ${result.error}`);
// Logs: "Validation failed: Exact length must be 5"
}
```
## API Reference
### validateString(config)
The main entry point for the plugin. It configures and registers a `recordHook` that listens for new commits and applies the specified string validations to each record.
**Parameters:**
* `config` (StringValidationConfig): An object that defines the validation and transformation rules.
**Returns:** A function that takes a `FlatfileListener` instance and attaches the validation logic to it.
### validateAndTransformString(value, config)
An exported utility function that runs the validation and transformation logic on a single string value. It is used internally by the plugin but can be used for custom validation logic if needed.
**Parameters:**
* `value` (string): The input string to validate and transform.
* `config` (StringValidationConfig): The configuration object defining the rules to apply.
**Returns:** An object of type `ValidationResult` with two properties:
* `value` (string): The original or transformed string value.
* `error` (string | null): An error message string if any validation fails, otherwise `null`.
## Notes
### Default Behavior
* **Empty strings**: By default, empty strings are considered invalid. To allow them, you must explicitly set `emptyStringAllowed: true` in the configuration.
* **Sheet targeting**: When no `sheetSlug` is specified, the plugin applies to all sheets in the workbook using the default value `'**'`.
* **Error messages**: The plugin provides default error messages for each validation type (e.g., "Invalid format", "Minimum length is X") which can be customized using the `errorMessages` configuration option.
### Important Considerations
* The plugin operates using the `@flatfile/plugin-record-hook`, which is a dependency. It processes records individually during the `commit:created` event.
* The plugin can modify data. When a transformation is applied (e.g., changing case or trimming whitespace), the record's value is updated with `record.set()`.
* By default, when a transformation is applied, the plugin also adds a validation message to the cell (e.g., "Field value must be in titlecase"). This informs the user that their original data was automatically corrected.
* The plugin handles `null` and `undefined` values by simply skipping them. Validation is only applied to defined string values.
* The plugin does not throw exceptions. Instead, it captures validation failures and adds them as errors to the corresponding record and field using `record.addError(field, message)`. These errors are then visible to the user in the Flatfile UI.
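The casing transformation described above can be sketched as follows. This is an illustrative implementation of the three `caseType` values, not the plugin's actual source; the real plugin applies the result via `record.set()` and attaches a cell message.

```javascript
// Hypothetical sketch of the caseType transformation
function applyCaseType(value, caseType) {
  switch (caseType) {
    case 'lowercase':
      return value.toLowerCase();
    case 'uppercase':
      return value.toUpperCase();
    case 'titlecase':
      // Capitalize the first letter of each word, lowercase the rest
      return value.replace(
        /\w\S*/g,
        (w) => w.charAt(0).toUpperCase() + w.slice(1).toLowerCase()
      );
    default:
      return value;
  }
}
```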
# Text Summarization and Key Phrase Extraction Plugin
Source: https://flatfile.com/docs/plugins/summarize
Automatically enriches Flatfile data by performing text summarization and key phrase extraction on specified text fields using natural language processing.
This plugin automatically enriches Flatfile data by performing text summarization and key phrase extraction on a specified text field. It uses the 'compromise' natural language processing library to analyze the content of a source field, generate a concise summary, and identify important phrases. The generated summary and key phrases are then populated into their designated fields.
This is useful for processing large blocks of text, such as product descriptions, user feedback, article content, or support tickets, to quickly get a high-level overview and identify key topics without manual review. The plugin is configured per sheet and can be customized to control the length of the generated summary.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-enrich-summarize
```
## Configuration & Parameters
The `summarize` function accepts a configuration object with the following parameters:
| Parameter | Type | Required | Default | Description |
| ------------------- | ------ | -------- | ------- | -------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `sheetSlug` | string | Yes | - | The slug of the sheet that the plugin should operate on |
| `contentField` | string | Yes | - | The API key of the field that contains the source text to be summarized |
| `summaryField` | string | Yes | - | The API key of the field where the generated summary should be stored |
| `keyPhrasesField` | string | Yes | - | The API key of the field where the extracted key phrases (as a comma-separated string) should be stored |
| `summaryLength` | number | No | 2 | An optional integer specifying the number of sentences the final summary should contain. This is overridden if `summaryPercentage` is also set |
| `summaryPercentage` | number | No | - | An optional number specifying the desired summary length as a percentage of the total sentences in the content. For example, a value of 30 would create a summary using 30% of the original sentences. This option takes precedence over `summaryLength` |
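The interaction between `summaryLength` and `summaryPercentage` can be sketched as a simple sentence-count calculation. The rounding and clamping behavior here is an assumption for illustration; the plugin's exact arithmetic may differ.

```javascript
// Hypothetical sketch of deriving the target summary length in sentences.
// summaryPercentage takes precedence over summaryLength when both are set.
function targetSentenceCount(totalSentences, summaryLength = 2, summaryPercentage) {
  if (summaryPercentage != null) {
    // e.g. 30 means "30% of the original sentences", with at least one sentence
    return Math.max(1, Math.round((summaryPercentage / 100) * totalSentences));
  }
  // Never ask for more sentences than the text contains
  return Math.min(summaryLength, totalSentences);
}
```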
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
listener.use(
summarize({
sheetSlug: "articles",
contentField: "full_text",
summaryField: "summary",
keyPhrasesField: "key_phrases",
})
);
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
listener.use(
summarize({
sheetSlug: "articles",
contentField: "full_text",
summaryField: "summary",
keyPhrasesField: "key_phrases",
})
);
```
### Custom Summary Length
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
// Configure the plugin to create a 3-sentence summary
listener.use(
summarize({
sheetSlug: "articles",
contentField: "full_text",
summaryField: "summary",
keyPhrasesField: "key_phrases",
summaryLength: 3
})
);
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
// Configure the plugin to create a 3-sentence summary
listener.use(
summarize({
sheetSlug: "articles",
contentField: "full_text",
summaryField: "summary",
keyPhrasesField: "key_phrases",
summaryLength: 3
})
);
```
### Percentage-Based Summary
```javascript JavaScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
// Configure the plugin to create a summary using 25% of the original sentences
listener.use(
summarize({
sheetSlug: "product_reviews",
contentField: "review_text",
summaryField: "review_summary",
keyPhrasesField: "review_tags",
summaryPercentage: 25
})
);
```
```typescript TypeScript
import { FlatfileListener } from "@flatfile/listener";
import { summarize } from "@flatfile/plugin-enrich-summarize";
const listener = new FlatfileListener();
// Configure the plugin to create a summary using 25% of the original sentences
listener.use(
summarize({
sheetSlug: "product_reviews",
contentField: "review_text",
summaryField: "review_summary",
keyPhrasesField: "review_tags",
summaryPercentage: 25
})
);
```
## Troubleshooting
* **Plugin not triggering**: Double-check that the `sheetSlug` in your configuration exactly matches the slug of your Sheet in Flatfile
* **Summaries not appearing**: Verify that the `contentField`, `summaryField`, and `keyPhrasesField` names are correct and that the `contentField` actually contains text
* **Data not being processed**: Ensure the `summaryField` is empty for records you expect to be processed, as the plugin will not overwrite existing summaries
## Notes
### Default Behavior
By default, the plugin generates a summary containing 2 sentences. The summarization logic takes sentences from the beginning and end of the source text to form the summary. The plugin will only generate a summary and key phrases if the target `summaryField` is empty; it will not overwrite existing data. If the source `contentField` for a record is empty, the plugin will add an error to that field and skip processing for that record.
### Limitations and Considerations
* The plugin's summarization algorithm is basic: it combines a number of sentences from the beginning and end of the text, separated by "...". It is not an abstractive or AI-based summarization
* The plugin will not overwrite data. If the `summaryField` already contains a value for a given record, that record will be skipped
* Key phrase extraction is based on a fixed grammatical pattern (`#Adjective? #Adjective? #Noun+`), which identifies phrases consisting of up to two optional adjectives followed by one or more nouns
* The plugin depends on the `compromise` NLP library, and the quality of the output is determined by its capabilities
* The plugin is designed to work on `string` field types
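The begin-and-end strategy described in the limitations above can be sketched as follows. How the sentence budget is split between the two ends is an assumption; only the "take sentences from the beginning and end, joined by `...`" behavior comes from the plugin's documented description.

```javascript
// Hypothetical sketch of the extractive begin-and-end summary strategy
function naiveSummary(sentences, count) {
  if (sentences.length <= count) return sentences.join(' ');
  // Split the budget between the start and the end of the text (assumed split)
  const fromStart = Math.ceil(count / 2);
  const fromEnd = count - fromStart;
  const head = sentences.slice(0, fromStart);
  const tail = fromEnd > 0 ? sentences.slice(-fromEnd) : [];
  // The two halves are joined with "..." to signal the omitted middle
  return [head.join(' '), tail.join(' ')].filter(Boolean).join(' ... ');
}
```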
### Error Handling
The main error handling pattern is to check for preconditions on a per-record basis. If a precondition fails (e.g., the source text field is empty), the plugin adds a field-level error to the record using `record.addError()`. This provides direct feedback to the user in the Flatfile UI without halting the entire import process.
# Field Translation using Google Translate API
Source: https://flatfile.com/docs/plugins/translate
Automatically translate text content in specified fields using Google Translate API and create new fields with translated content
The Translate plugin for Flatfile integrates with the Google Translate API to automatically translate the text content of specified fields within a sheet. When records are processed, the plugin takes the values from designated source fields, translates them from a specified source language to a target language, and then adds the translated text into newly created fields. The new fields are automatically named by appending the target language code to the original field name (e.g., `description` becomes `description_es`). This is useful for data localization, preparing multilingual product catalogs, or standardizing user-submitted content into a single language.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-convert-translate
```
## Configuration & Parameters
The plugin requires a configuration object with the following properties. All options are required and there are no default values:
| Parameter | Type | Description |
| ------------------- | ---------- | ---------------------------------------------------------------------------------------------------------------- |
| `sourceLanguage` | `string` | The IETF language code of the source text (e.g., 'en' for English) |
| `targetLanguage` | `string` | The IETF language code for the target language (e.g., 'es' for Spanish). This is also used to name the new field |
| `sheetSlug` | `string` | The slug of the sheet that the plugin should listen to. Records from other sheets will be ignored |
| `fieldsToTranslate` | `string[]` | An array of field keys (names) that should be translated. The plugin will only process these fields |
| `projectId` | `string` | Your Google Cloud project ID associated with the Google Translate API |
| `keyFilename` | `string` | The absolute or relative path to your Google Cloud service account key file (JSON) |
### Default Behavior
By default, the plugin does not do anything until it is configured and registered with a Flatfile listener. Once configured, it will listen for record processing events on the specified `sheetSlug`. For each record, it will read the text from the `fieldsToTranslate`, call the Google Translate API, and write the results to new fields. If a field to be translated is empty or null, it is skipped.
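The skip logic for empty source values can be sketched as below. A plain object stands in for the record here (the real plugin reads values through the `FlatfileRecord` API), so treat this as an illustration of the filtering rule only.

```javascript
// Hypothetical sketch of selecting which configured fields get translated.
// Empty-string and null/undefined values are skipped, per the default behavior.
function valuesToTranslate(record, fieldsToTranslate) {
  return fieldsToTranslate.filter((key) => {
    const value = record[key];
    return typeof value === 'string' && value.trim() !== '';
  });
}
```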
## Usage Examples
### Basic Usage
```javascript
import { FlatfileListener } from '@flatfile/listener';
import { convertTranslatePlugin } from '@flatfile/plugin-convert-translate';
export default function (listener) {
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'fr',
sheetSlug: 'contacts',
fieldsToTranslate: ['notes', 'comments'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
}
```
```typescript
import { FlatfileListener } from '@flatfile/listener';
import { convertTranslatePlugin } from '@flatfile/plugin-convert-translate';
export default function (listener: FlatfileListener) {
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'fr',
sheetSlug: 'contacts',
fieldsToTranslate: ['notes', 'comments'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
}
```
### Multiple Language Translation
To translate to multiple languages, use the plugin multiple times with different configurations:
```javascript
import { FlatfileListener } from '@flatfile/listener';
import { convertTranslatePlugin } from '@flatfile/plugin-convert-translate';
export default function (listener) {
// Translate to Spanish
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'es',
sheetSlug: 'products',
fieldsToTranslate: ['name', 'description'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
// Translate to German
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'de',
sheetSlug: 'products',
fieldsToTranslate: ['name', 'description'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
}
```
```typescript
import { FlatfileListener } from '@flatfile/listener';
import { convertTranslatePlugin } from '@flatfile/plugin-convert-translate';
export default function (listener: FlatfileListener) {
// Translate to Spanish
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'es',
sheetSlug: 'products',
fieldsToTranslate: ['name', 'description'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
// Translate to German
listener.use(
convertTranslatePlugin({
sourceLanguage: 'en',
targetLanguage: 'de',
sheetSlug: 'products',
fieldsToTranslate: ['name', 'description'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
})
);
}
```
### Manual Translation with translateRecord
```javascript
import { translateRecord, convertTranslatePlugin } from '@flatfile/plugin-convert-translate';
const config = {
sourceLanguage: 'en',
targetLanguage: 'ja',
sheetSlug: 'articles',
fieldsToTranslate: ['title'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
};
// Initialize the client (required for translateRecord to work);
// `listener` is an existing FlatfileListener instance from your entry point
convertTranslatePlugin(listener, config);
// Manually process a record (`record` is a FlatfileRecord, e.g. from a record hook)
record.set('title', 'Hello World');
const { record: updatedRecord, error } = translateRecord(record, config);
if (error) {
console.error('Translation failed:', error);
} else {
const translatedTitle = updatedRecord.get('title_ja');
console.log(translatedTitle); // Expected to be the Japanese translation
}
```
```typescript
import { FlatfileRecord } from '@flatfile/plugin-record-hook';
import { translateRecord, convertTranslatePlugin, TranslationConfig } from '@flatfile/plugin-convert-translate';
const config: TranslationConfig = {
sourceLanguage: 'en',
targetLanguage: 'ja',
sheetSlug: 'articles',
fieldsToTranslate: ['title'],
projectId: 'your-gcp-project-id',
keyFilename: './gcp-credentials.json',
};
// Initialize the client (required for translateRecord to work);
// `listener` is an existing FlatfileListener instance from your entry point
convertTranslatePlugin(listener, config);
// Manually process a record (`record` is a FlatfileRecord, e.g. from a record hook)
record.set('title', 'Hello World');
const { record: updatedRecord, error } = translateRecord(record, config);
if (error) {
console.error('Translation failed:', error);
} else {
const translatedTitle = updatedRecord.get('title_ja');
console.log(translatedTitle); // Expected to be the Japanese translation
}
```
## API Reference
### convertTranslatePlugin
The main entry point for the plugin. This function registers a `recordHook` with the provided Flatfile listener, which will automatically translate specified fields on records belonging to the configured sheet.
**Signature:**
```typescript
convertTranslatePlugin(listener: FlatfileListener, config: TranslationConfig): void
```
**Parameters:**
* `listener`: FlatfileListener - An instance of a Flatfile listener to attach the hook to
* `config`: TranslationConfig - The configuration object for the plugin
**Returns:**
`void` - This function does not return a value. It modifies the listener instance directly.
### translateRecord
A standalone function that applies the translation logic to a single `FlatfileRecord`. It reads values from the fields specified in the config, translates them, and sets the translated values on new fields in the record.
**Signature:**
```typescript
translateRecord(record: FlatfileRecord, config: TranslationConfig): { record: FlatfileRecord; error?: string }
```
**Parameters:**
* `record`: FlatfileRecord - The record to process
* `config`: TranslationConfig - A configuration object specifying languages, fields, and credentials
**Returns:**
An object containing:
* `record`: FlatfileRecord - The record with new fields containing translated text
* `error?`: string - An error message if an issue occurred during processing
## Troubleshooting
### Error Handling
The plugin includes comprehensive error handling:
* **API Errors**: If the Google Translate API call fails, the error is logged to the console and the record won't be updated with translated values
* **Record-level Errors**: If an error occurs during processing of a single record, the plugin will attach a general error message to that specific record using `record.addError()`, making it visible in the Flatfile UI
* **No Text to Translate**: If a record has no text in any of the `fieldsToTranslate`, it is skipped without error
```javascript
// Error handling example
const { record: updatedRecord, error } = translateRecord(record, config);
if (error) {
// This adds the error to the record, which is visible to the user
updatedRecord.addError('general', error);
}
return updatedRecord;
```
```typescript
// Error handling example
const { record: updatedRecord, error } = translateRecord(record, config);
if (error) {
// This adds the error to the record, which is visible to the user
updatedRecord.addError('general', error);
}
return updatedRecord;
```
## Notes
### Requirements
* **Google Cloud Account**: A Google Cloud project with the Translate API enabled is required
* **Credentials**: You must provide a valid `projectId` and a path to a service account `keyFilename` with appropriate permissions
### Field Naming Convention
The plugin creates new fields for translated content by appending `_{targetLanguage}` to the original field name (e.g., `name` becomes `name_es`). Ensure this does not conflict with existing fields in your sheet configuration.
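The naming convention above is simple enough to express directly, along with a conflict check against an existing sheet configuration. The conflict-check helper is hypothetical; only the `field_targetLanguage` naming rule comes from the plugin's documentation.

```javascript
// The documented naming convention: original key + '_' + target language code
function translatedFieldKey(fieldKey, targetLanguage) {
  return `${fieldKey}_${targetLanguage}`;
}

// Hypothetical helper: detect collisions with fields already in the sheet
function hasNamingConflict(existingKeys, fieldsToTranslate, targetLanguage) {
  return fieldsToTranslate.some((field) =>
    existingKeys.includes(translatedFieldKey(field, targetLanguage))
  );
}
```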
### Limitations
* **Field Types**: The plugin is designed to work with `string` fields. Behavior with other field types is undefined
* **Initialization**: The Google Translate client is initialized globally within the plugin's scope when `convertTranslatePlugin` is first called. Subsequent calls with different credentials will re-initialize the client
# View Mapped Data Plugin
Source: https://flatfile.com/docs/plugins/view-mapped
Automatically hides unmapped columns after the mapping stage to provide a cleaner data review experience
The View Mapped Data Plugin enhances the user experience during the data import process by automatically modifying the data grid view after users complete the column matching step. It hides all columns in the destination sheet that were not mapped by the user, providing a cleaner and more focused view for data validation.
The primary use case is to simplify the "Review" stage of an import by showing only columns that contain mapped data. This reduces visual clutter and helps users concentrate on the data that will actually be imported. The plugin also includes an option to retain required fields in the view, even if they were not mapped.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-view-mapped
```
## Configuration & Parameters
The plugin accepts an optional configuration object with the following parameter:
### `keepRequiredFields`
* **Type:** `boolean`
* **Default:** `false`
* **Description:** Controls whether fields marked as "required" in the sheet's configuration are kept visible after mapping, even if the user did not map any source data to them.
### Default Behavior
By default, the plugin will hide all unmapped fields, including those that are required. Only fields that the user explicitly mapped will remain visible in the data review step.
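The visibility decision described above can be sketched as a filter over the sheet's fields. The field shape used here (a `key` plus an optional `constraints` array with `type: 'required'` entries) follows Flatfile's blueprint format; the function itself is an illustration, not the plugin's source.

```javascript
// Hypothetical sketch of which fields remain visible after mapping
function visibleFieldKeys(fields, mappedKeys, keepRequiredFields = false) {
  return fields
    .filter((field) => {
      const isMapped = mappedKeys.includes(field.key);
      const isRequired = (field.constraints || []).some(
        (c) => c.type === 'required'
      );
      // Unmapped fields are hidden unless keepRequiredFields preserves them
      return isMapped || (keepRequiredFields && isRequired);
    })
    .map((field) => field.key);
}
```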
## Usage Examples
```javascript Basic Usage
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener) {
listener.use(viewMappedPlugin())
}
```
```typescript Basic Usage
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener: FlatfileListener) {
listener.use(viewMappedPlugin())
}
```
```javascript With Configuration
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener) {
listener.use(viewMappedPlugin({
keepRequiredFields: true
}))
}
```
```typescript With Configuration
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener: FlatfileListener) {
listener.use(viewMappedPlugin({
keepRequiredFields: true
}))
}
```
```javascript Complete Example
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener) {
// Add the plugin to the listener
listener.use(viewMappedPlugin({ keepRequiredFields: true }))
// Define other listeners
listener.on('**', (event) => {
// Your other event handlers
})
}
```
```typescript Complete Example
import { FlatfileListener } from '@flatfile/listener'
import { viewMappedPlugin } from '@flatfile/plugin-view-mapped'
export default function(listener: FlatfileListener) {
// Add the plugin to the listener
listener.use(viewMappedPlugin({ keepRequiredFields: true }))
// Define other listeners
listener.on('**', (event) => {
// Your other event handlers
})
}
```
## Troubleshooting
### Error Handling
The plugin includes built-in error handling. If an issue occurs while updating the sheet view, the job will fail and log the error to the console:
```javascript
try {
// The plugin's internal job handler logic runs here
} catch (error) {
// If an error occurs, it is logged and a generic error is thrown to the UI
logError('@flatfile/plugin-view-mapped', JSON.stringify(error, null, 2))
throw new Error('plugins.viewMapped.error')
}
```
## Notes
### Requirements
* **`trackChanges` Required:** The workbook being processed must have the `trackChanges` setting enabled. The plugin checks for this setting and will abort its operation if it is not enabled. This prevents race conditions between different data processing hooks and the workbook update.
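A minimal sketch of a workbook configuration with `trackChanges` enabled is shown below; the sheet contents are elided, and the surrounding structure follows Flatfile's workbook format.

```javascript
// Sketch: enabling trackChanges so the view-mapped plugin can run
const workbook = {
  name: 'My Workbook',
  settings: {
    trackChanges: true, // required by the view-mapped plugin
  },
  sheets: [
    /* ... sheet configs ... */
  ],
};
```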
### Behavior Details
* **Foreground Job:** The plugin creates a `foreground` job to update the sheet view. This means the user interface will be blocked with a progress indicator while columns are being hidden, preventing user interaction with a stale view of the data.
* **Commit Synchronization:** The plugin includes logic to wait for all pending sheet commits to complete before attempting to update the workbook's field configuration. This prevents race conditions and ensures data integrity.
### API Reference
#### `viewMappedPlugin(options?: ViewMappedOptions)`
Initializes the View Mapped Data plugin and returns a configured listener that should be passed to `listener.use()`. The plugin listens for the completion of a `workbook:map` job and then creates a new job to update the destination sheet's configuration, hiding any unmapped columns.
**Parameters:**
* `options` (optional): Configuration object with `keepRequiredFields` boolean property
**Returns:** A listener function of type `(listener: FlatfileListener) => void`
# Webhook Egress
Source: https://flatfile.com/docs/plugins/webhook-egress
Send workbook data from Flatfile to external webhook endpoints for seamless integration with your systems
The Webhook Egress plugin acts as a bridge between Flatfile and external systems by sending workbook data to webhook endpoints. When a user triggers a specific action in the Flatfile UI (like a "Submit" button), this plugin gathers all data from all sheets within the current workbook, packages it into a single JSON payload, and sends it via an HTTP POST request to a pre-configured webhook URL.
This plugin is ideal for integrating Flatfile with custom backends, serverless functions, or third-party services that accept data via webhooks. Common use cases include triggering data processing pipelines, updating databases, or pushing data into CRM systems once users have finished cleaning and preparing their data in Flatfile.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-webhook-egress
```
## Configuration & Parameters
### Parameters
* **Job name** (string, first argument): The name of the job or event that triggers the egress process. This must correspond to an `operation` name defined in an action in your workbook configuration (e.g., `'workbook:submitActionFg'`).
* `webhookUrl` (string, second argument, optional): The URL of the webhook endpoint where the workbook data will be sent. If not provided, the plugin uses the value of the `WEBHOOK_SITE_URL` environment variable.
### Default Behavior
When the `webhookUrl` parameter is not provided during initialization, the plugin will look for an environment variable named `WEBHOOK_SITE_URL` and use its value as the destination for the data. If neither the parameter nor the environment variable is set, the request will fail. The plugin sends data as a JSON payload in an HTTP POST request with `Content-Type: application/json`.
## Usage Examples
### Basic Usage
```javascript JavaScript
// listener.js
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
listener.use(webhookEgress('workbook:submitActionFg'));
// You also need to define the action in your workbook config
// workbook.config.js
const workbookConfig = {
name: 'My Workbook',
sheets: { /* ... */ },
actions: [
{
operation: 'submitActionFg',
mode: 'foreground',
label: 'Submit Data',
description: 'Send data to our system.',
primary: true,
},
],
};
```
```typescript TypeScript
// listener.ts
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
listener.use(webhookEgress('workbook:submitActionFg'));
// You also need to define the action in your workbook config
// workbook.config.ts
const workbookConfig = {
name: 'My Workbook',
sheets: { /* ... */ },
actions: [
{
operation: 'submitActionFg',
mode: 'foreground',
label: 'Submit Data',
description: 'Send data to our system.',
primary: true,
},
],
};
```
### Explicit Webhook URL Configuration
```javascript JavaScript
// listener.js
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
const MY_WEBHOOK_URL = 'https://api.myapp.com/data-ingest';
listener.use(webhookEgress('workbook:submitActionFg', MY_WEBHOOK_URL));
```
```typescript TypeScript
// listener.ts
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
const MY_WEBHOOK_URL: string = 'https://api.myapp.com/data-ingest';
listener.use(webhookEgress('workbook:submitActionFg', MY_WEBHOOK_URL));
```
### Webhook Response with Rejections
The plugin can process responses from your webhook that contain data rejections. If your webhook performs validation and finds errors, it can return them in a specific JSON format, and the plugin will display these errors in the Flatfile UI.
```javascript JavaScript
// Example webhook endpoint response format
// POST https://api.myapp.com/data-ingest
/*
{
"rejections": {
"id": "wb_abc123", // The workbook ID
"sheets": [
{
"sheetId": "us_sh_xyz456", // The sheet ID
"rejectedRecords": [
{
"id": "dev_rc_123", // The record ID
"values": [
{
"field": "email",
"message": "This email address is already in use."
}
]
}
]
}
]
}
}
*/
// No special client-side configuration needed for rejections
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
listener.use(webhookEgress('workbook:submit', 'https://webhook.site/your-unique-url'));
```
```typescript TypeScript
// Example webhook endpoint response format
// POST https://api.myapp.com/data-ingest
/*
{
"rejections": {
"id": "wb_abc123", // The workbook ID
"sheets": [
{
"sheetId": "us_sh_xyz456", // The sheet ID
"rejectedRecords": [
{
"id": "dev_rc_123", // The record ID
"values": [
{
"field": "email",
"message": "This email address is already in use."
}
]
}
]
}
]
}
}
*/
// No special client-side configuration needed for rejections
import { listener } from '@flatfile/listener';
import { webhookEgress } from '@flatfile/plugin-webhook-egress';
listener.use(webhookEgress('workbook:submit', 'https://webhook.site/your-unique-url'));
```
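The rejections payload shown above can be assembled on your server before responding to the plugin's POST request. The following is a minimal sketch; the `buildRejectionsResponse` helper and its type names are illustrative, not part of any Flatfile package:

```typescript
// Shapes mirroring the rejections format documented above
interface RejectedValue { field: string; message: string }
interface RejectedRecord { id: string; values: RejectedValue[] }
interface SheetRejections { sheetId: string; rejectedRecords: RejectedRecord[] }
interface RejectionsResponse {
  rejections: { id: string; sheets: SheetRejections[] }
}

// Build the JSON body your webhook returns when validation fails
function buildRejectionsResponse(
  workbookId: string,
  sheets: SheetRejections[]
): RejectionsResponse {
  return { rejections: { id: workbookId, sheets } };
}

const body = buildRejectionsResponse('wb_abc123', [
  {
    sheetId: 'us_sh_xyz456',
    rejectedRecords: [
      {
        id: 'dev_rc_123',
        values: [
          { field: 'email', message: 'This email address is already in use.' },
        ],
      },
    ],
  },
]);
```

Returning `JSON.stringify(body)` with `Content-Type: application/json` from your endpoint is enough for the plugin to surface the errors in the Flatfile UI.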
## Troubleshooting
### Common Error Messages
**"Failed to submit data to \[webhook]"**
* Check that the webhook URL is correct
* Verify the server is running and accessible
* Ensure the webhook is not returning error status codes (4xx or 5xx)
**"Error posting data to webhook"**
* Check for network connectivity issues
* Verify DNS resolution for the webhook URL
* Confirm the webhook URL is properly formatted
**"Request succeeded but failed to parse response from webhook"**
* Your webhook returns a 200 OK status but the response body is not valid JSON
* Ensure your endpoint returns `Content-Type: application/json`
* Return at least an empty JSON object `{}` if no specific response is needed
### Error Handling Patterns
The plugin handles errors internally and reports them as job outcomes:
* **Non-200 HTTP status codes**: The job completes with a failure message showing the status code
* **Network errors**: The job fails with a generic "Error posting data to webhook" message
* **Invalid JSON responses**: The job completes with a success status but warns about parsing issues
## Notes
### Requirements and Limitations
* An action must be configured in the workbook with an `operation` that matches the `job` string passed to the plugin
* The webhook endpoint must accept POST requests with `Content-Type: application/json`
* The entire workbook's data (all sheets and all records) is sent in a single request, which may result in large payloads for extensive workbooks
### Dependencies
* The plugin depends on `@flatfile/plugin-job-handler` for job lifecycle management (bundled with the plugin)
* Uses `@flatfile/util-response-rejection` for processing webhook rejection responses
* Utilizes `@flatfile/util-common` for error logging
### Validation Flow
The plugin supports a two-way validation flow where your webhook can return validation errors in the `rejections` format. These errors are automatically processed and displayed in the Flatfile UI, allowing users to see and correct data issues identified by your backend systems.
# Webhook Event Forwarder
Source: https://flatfile.com/docs/plugins/webhook-event-forwarder
Forward all Flatfile events to a webhook URL for language-agnostic integration with external systems
The Webhook Event Forwarder plugin for Flatfile is designed to capture all events within a Flatfile listener and forward them to a specified webhook URL. This allows developers to process Flatfile events in external systems, regardless of the programming language used at the endpoint.
Its primary purpose is to enable language-agnostic integration with backend services. Common use cases include:
* Triggering custom workflows in services like Zapier or Make
* Storing event data in an external database or data warehouse for analytics
* Notifying external systems of user actions within Flatfile (e.g., data submission, file upload)
* Implementing custom validation or data processing logic on a separate server
The plugin listens for all events (`**`), packages the full event object as a JSON payload, and sends it via an HTTP POST request.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-webhook-event-forwarder
```
## Configuration & Parameters
### webhookEventForward(url, callback?, options?)
| Parameter | Type | Required | Description |
| --------------- | ---------- | -------- | --------------------------------------------------------------------------------------------------------------------------------------------------------------------------- |
| `url` | `string` | Yes | The webhook endpoint URL where the plugin will send all Flatfile events via an HTTP POST request |
| `callback` | `function` | No | An optional callback function that is executed after the webhook request is complete. Receives `data` (webhook response) and `event` (original Flatfile event) as arguments |
| `options` | `object` | No | Additional configuration options |
| `options.debug` | `boolean` | No | When set to `true`, logs any errors that occur during the webhook forwarding process to the console. Default: `false` |
### Default Behavior
By default, the plugin listens for every event (`**`) emitted by the Flatfile listener. For each event, it sends an HTTP POST request to the provided `url` with the event object serialized as a JSON string in the request body. If the webhook call fails, the error is suppressed unless `options.debug` is set to `true`. No action is taken on the webhook's response unless a `callback` function is provided.
## Usage Examples
### Basic Usage
```javascript JavaScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
listener.use(webhookEventForward('https://webhook.site/your-unique-id'));
```
```typescript TypeScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
listener.use(webhookEventForward('https://webhook.site/your-unique-id'));
```
### Configuration with Callback and Debug
```javascript JavaScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
const myCallback = (response, event) => {
console.log(`Received response for event: ${event.topic}`);
console.log('Webhook response data:', response);
};
const options = {
debug: true
};
listener.use(webhookEventForward('https://webhook.site/your-unique-id', myCallback, options));
```
```typescript TypeScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
import type { FlatfileEvent } from '@flatfile/listener';
const myCallback = (response: any, event: FlatfileEvent) => {
console.log(`Received response for event: ${event.topic}`);
console.log('Webhook response data:', response);
};
const options = {
debug: true
};
listener.use(webhookEventForward('https://webhook.site/your-unique-id', myCallback, options));
```
### Advanced Usage with Async Callback
```javascript JavaScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
const handleWebhookResponse = async (response, event) => {
if (response?.error) {
console.error(`Failed to process event ${event.id} at webhook. Error: ${response.message}`);
// You could add logic here to retry the operation or send a notification
} else {
console.log(`Event ${event.id} successfully processed by webhook.`);
// You could update a record in your own database with the response
}
};
listener.use(webhookEventForward('https://api.my-service.com/flatfile-hook', handleWebhookResponse));
```
```typescript TypeScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
import type { FlatfileEvent } from '@flatfile/listener';
const handleWebhookResponse = async (response: any, event: FlatfileEvent) => {
if (response?.error) {
console.error(`Failed to process event ${event.id} at webhook. Error: ${response.message}`);
// You could add logic here to retry the operation or send a notification
} else {
console.log(`Event ${event.id} successfully processed by webhook.`);
// You could update a record in your own database with the response
}
};
listener.use(webhookEventForward('https://api.my-service.com/flatfile-hook', handleWebhookResponse));
```
### Error Handling Example
```javascript JavaScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
const myCallback = (response, event) => {
// Check for the error object passed by the plugin on failure
if (response && response.error === true) {
console.error(`Webhook forwarding failed for event: ${event.topic}`);
console.error(`Message: ${response.message}`);
console.error(`Details: ${response.data}`);
} else {
console.log(`Webhook for event ${event.topic} succeeded.`);
}
};
// Use a URL that is expected to fail to demonstrate error handling
// and enable debug logging to see the error in the console.
listener.use(webhookEventForward('https://httpstat.us/500', myCallback, { debug: true }));
```
```typescript TypeScript
import { listener } from './listener'; // Your listener instance
import { webhookEventForward } from '@flatfile/plugin-webhook-event-forwarder';
import type { FlatfileEvent } from '@flatfile/listener';
const myCallback = (response: any, event: FlatfileEvent) => {
// Check for the error object passed by the plugin on failure
if (response && response.error === true) {
console.error(`Webhook forwarding failed for event: ${event.topic}`);
console.error(`Message: ${response.message}`);
console.error(`Details: ${response.data}`);
} else {
console.log(`Webhook for event ${event.topic} succeeded.`);
}
};
// Use a URL that is expected to fail to demonstrate error handling
// and enable debug logging to see the error in the console.
listener.use(webhookEventForward('https://httpstat.us/500', myCallback, { debug: true }));
```
## Troubleshooting
To troubleshoot issues:
1. **Enable debug mode**: Set `debug: true` in the options object to see detailed error messages in your console logs.
2. **Verify URL accessibility**: Ensure that the `url` provided is correct and is publicly accessible from the environment where your Flatfile listener is running.
3. **Check webhook endpoint logs**: Review the server logs of your webhook endpoint to see if it's receiving requests and if it's returning any errors.
4. **Test with webhook.site**: Use a service like `webhook.site` to test if events are being sent correctly from the plugin.
## Notes
### Important Considerations
* **Event Volume**: The plugin listens for `**` (all events), which can generate a high volume of HTTP requests to your endpoint. Ensure your webhook receiver can handle the potential load.
* **Synchronous vs. Asynchronous**: The plugin sends the webhook request and continues. While the callback function can be `async` and is `await`ed, the plugin does not block the Flatfile event lifecycle.
* **Security**: The plugin sends the full event payload, which may contain sensitive data. Ensure your webhook endpoint is secure (uses HTTPS) and properly authenticated if necessary. The plugin itself does not add any authentication headers; this would need to be handled by the receiving endpoint or by wrapping the fetch call.
### Error Handling Patterns
The plugin uses a `try...catch` block to handle errors during the `fetch` call. If an error occurs (including non-ok HTTP responses like 4xx or 5xx), it will not crash the listener process. Instead, it constructs a standardized error object:
```json
{
"error": true,
"message": "Error received, please try again",
"data": "error details"
}
```
This object is then passed as the first argument to the `callback` function, if provided. If `options.debug` is `true`, the original error is also logged to the console.
# What3Words Converter Plugin
Source: https://flatfile.com/docs/plugins/what3words
Automatically convert What3Words addresses into standard geographic data including country codes, nearest places, and latitude/longitude coordinates using the official What3Words API.
The What3Words plugin for Flatfile automatically converts What3Words (W3W) addresses into standard geographic data. It listens for data commits, takes a W3W address from a specified field in a sheet, and uses the official What3Words API to fetch the corresponding country, nearest place, and latitude/longitude coordinates. The plugin then populates this information into other specified fields within the same record.
This is useful for any data import workflow where users provide locations using the What3Words system, and the application requires standard address components or geographic coordinates for mapping, logistics, or data normalization.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-convert-what3words
```
## Configuration & Parameters
The plugin is configured through the `convertWhat3words` function, which accepts a configuration object with the following parameters:
### Required Parameters
**`what3wordsField`** (string): The key of the field in your sheet that contains the What3Words address to be converted.
**`countryField`** (string): The key of the field where the resulting country code (e.g., "GB") will be stored.
**`nearestPlaceField`** (string): The key of the field where the name of the nearest place (e.g., "London") will be stored.
**`latField`** (string): The key of the field where the resulting latitude coordinate will be stored.
**`longField`** (string): The key of the field where the resulting longitude coordinate will be stored.
### Optional Parameters
**`sheetSlug`** (string): The slug of the specific sheet you want this plugin to apply to. If not provided, the plugin will apply to all sheets in the workbook that have a field matching the `what3wordsField` configuration.
## Usage Examples
### Basic Usage
Apply the What3Words converter to a specific sheet:
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { convertWhat3words } from '@flatfile/plugin-convert-what3words';
export default function (listener) {
listener.use(
convertWhat3words({
sheetSlug: 'locations',
what3wordsField: 'w3w_address',
countryField: 'country_code',
nearestPlaceField: 'city',
latField: 'latitude',
longField: 'longitude'
})
);
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { convertWhat3words } from '@flatfile/plugin-convert-what3words';
export default function (listener: FlatfileListener) {
listener.use(
convertWhat3words({
sheetSlug: 'locations',
what3wordsField: 'w3w_address',
countryField: 'country_code',
nearestPlaceField: 'city',
latField: 'latitude',
longField: 'longitude'
})
);
}
```
### Apply to All Sheets
Configure the plugin to run on all sheets by omitting the `sheetSlug` option:
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { convertWhat3words } from '@flatfile/plugin-convert-what3words';
export default function (listener) {
// Applies to all sheets with a 'what3words' field
listener.use(
convertWhat3words({
what3wordsField: 'what3words',
countryField: 'country',
nearestPlaceField: 'nearestPlace',
latField: 'latitude',
longField: 'longitude'
})
);
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { convertWhat3words } from '@flatfile/plugin-convert-what3words';
export default function (listener: FlatfileListener) {
// Applies to all sheets with a 'what3words' field
listener.use(
convertWhat3words({
what3wordsField: 'what3words',
countryField: 'country',
nearestPlaceField: 'nearestPlace',
latField: 'latitude',
longField: 'longitude'
})
);
}
```
## Troubleshooting
### Conversions Not Working
* **API Key**: Verify that the `W3W_API_KEY` secret is correctly set in your Flatfile Space
* **Configuration**: Double-check that the `sheetSlug` and all field keys (`what3wordsField`, `countryField`, etc.) in your configuration exactly match the schema of your sheet
* **Data Format**: Ensure the field specified in `what3wordsField` contains valid, correctly formatted What3Words addresses
### Error Messages
If the What3Words API call fails for any reason (e.g., invalid W3W address, network error, invalid API key), the plugin will add a record-level error to the `what3wordsField` with the message "Invalid What3Words address". This error will be visible to the user in the Flatfile UI.
## Notes
### Environment Setup
You must add your What3Words API key as an environment secret named `W3W_API_KEY` in your Flatfile Space before using this plugin.
### Default Behavior
* **Sheet Targeting**: When `sheetSlug` is not provided, the plugin applies to all sheets in the workbook that contain a field matching the `what3wordsField` configuration
* **Event Trigger**: The plugin operates using `bulkRecordHook`, which runs when the user clicks "Continue" or "Submit" on a sheet (during the `commit:created` event)
* **Processing**: The conversion is not real-time as the user types; it occurs during the commit process
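The conversion boils down to mapping fields from the What3Words `convert-to-coordinates` API response onto the configured record fields. A rough sketch of that mapping, assuming the public v3 response shape (`country`, `nearestPlace`, `coordinates`) and the field keys from the basic usage example above (`toRecordFields` is illustrative):

```typescript
// Subset of the What3Words v3 convert-to-coordinates response
interface W3WResponse {
  country: string;                          // ISO country code, e.g. "GB"
  nearestPlace: string;                     // e.g. "Bayswater, London"
  coordinates: { lat: number; lng: number };
  words: string;                            // the original three-word address
}

// Map an API response onto the configured field keys
function toRecordFields(res: W3WResponse) {
  return {
    country_code: res.country,
    city: res.nearestPlace,
    latitude: res.coordinates.lat,
    longitude: res.coordinates.lng,
  };
}

const sample: W3WResponse = {
  country: 'GB',
  nearestPlace: 'Bayswater, London',
  coordinates: { lat: 51.520847, lng: -0.195521 },
  words: 'filled.count.soap',
};
const fields = toRecordFields(sample);
```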
### Error Handling
* The plugin processes records individually within a batch
* A failure on one record will not stop the processing of other records
* Errors are logged to the console for debugging purposes
* User-facing errors are attached directly to the record and field that caused the issue
### Idempotency
The plugin processes records on each commit. If data is re-committed, the conversion will run again, potentially overwriting previously converted data.
# Excel Extractor
Source: https://flatfile.com/docs/plugins/xlsx-extractor
Parse various Excel file formats (.xls, .xlsx, .xlsm, .xlsb, .xltx, .xltm) and extract structured data with support for header detection, merged cells, and hierarchical spreadsheets.
The Excel Extractor plugin is designed to parse various Excel file formats (.xls, .xlsx, .xlsm, .xlsb, .xltx, .xltm) and extract structured data from them. When a user uploads a supported Excel file, this plugin automatically processes it, detects headers, and transforms the sheet data into a format that Flatfile can use. It offers extensive configuration for handling complex Excel files, including options for header detection, processing merged cells, and cascading data in hierarchical spreadsheets. This plugin is intended to be used in a server-side listener within the Flatfile platform.
## Installation
Install the Excel Extractor plugin using npm:
```bash
npm install @flatfile/plugin-xlsx-extractor
```
## Configuration & Parameters
The Excel Extractor accepts the following configuration options:
### Basic Options
| Parameter | Type | Default | Description |
| ---------------- | ------- | ----------- | ------------------------------------------------------------- |
| `raw` | boolean | `false` | Extract raw, underlying cell values instead of formatted text |
| `rawNumbers` | boolean | `false` | Extract raw numeric values instead of formatted numbers |
| `dateNF` | string | `undefined` | Specific date format string for interpreting dates |
| `chunkSize` | number | `10000` | Number of records to process in each batch |
| `parallel` | number | `1` | Number of chunks to process concurrently |
| `skipEmptyLines` | boolean | `false` | Skip rows that are entirely empty |
| `debug` | boolean | `false` | Enable verbose logging for troubleshooting |
### Advanced Options
| Parameter | Type | Default | Description |
| ------------------------ | ------- | ----------- | ------------------------------------------------------------- |
| `cascadeRowValues` | boolean | `false` | Fill empty cells with values from the cell above |
| `cascadeHeaderValues` | boolean | `false` | Fill empty header cells with values from the cell to the left |
| `headerDetectionOptions` | object | See below | Configure header row detection |
| `mergedCellOptions` | object | `undefined` | Define how to handle merged cells |
### Header Detection Options
Default configuration:
```javascript
{
algorithm: 'default',
rowsToSearch: 10
}
```
Available algorithms:
* `'default'` - Scans the first 10 rows and selects the one with the most non-empty cells
* `'explicitHeaders'` - Use when headers are explicitly defined
* `'specificRows'` - Define specific row numbers as headers
* `'dataRowAndSubHeaderDetection'` - Advanced detection for complex header structures
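The `'default'` algorithm can be approximated as: among the first `rowsToSearch` rows, pick the first one with the highest count of non-empty cells. A simplified sketch of that idea (not the plugin's actual implementation):

```typescript
// Pick the index of the likely header row: the first row among the
// first `rowsToSearch` with the most non-empty cells.
function detectHeaderRow(rows: string[][], rowsToSearch = 10): number {
  let best = 0;
  let bestCount = -1;
  rows.slice(0, rowsToSearch).forEach((row, i) => {
    const count = row.filter((cell) => cell != null && cell.trim() !== '').length;
    if (count > bestCount) {
      bestCount = count;
      best = i;
    }
  });
  return best;
}

const rows = [
  ['Quarterly Report', '', ''],          // title row: 1 non-empty cell
  ['Name', 'Email', 'Age'],              // 3 non-empty cells -> header
  ['Ada', 'ada@example.com', '36'],
];
```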
### Merged Cell Options
Configure treatment of merged cells with these options:
| Treatment | Description |
| ---------------- | ------------------------------------------------ |
| `applyToAll` | Copy merged cell value to all cells in the range |
| `applyToTopLeft` | Keep value only in top-left cell |
| `coalesce` | Keep first row/column and remove others |
| `concatenate` | Combine values with a separator |
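For instance, the `concatenate` treatment joins a merged range's values with a separator. A toy sketch of that behavior, independent of the plugin's internals:

```typescript
// Combine the values of a merged cell range into one string,
// dropping empty cells and joining the rest with a separator.
function concatenateMergedRange(
  values: (string | null)[],
  separator = ' '
): string {
  return values
    .filter((v): v is string => !!v && v.trim() !== '')
    .join(separator);
}
```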
## Usage Examples
```javascript JavaScript - Basic Usage
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
export default function (listener) {
  listener.use(ExcelExtractor());
}
```
```typescript TypeScript - Basic Usage
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
import type { FlatfileListener } from '@flatfile/listener';
export default function (listener: FlatfileListener) {
  listener.use(ExcelExtractor());
}
```
### Configuration Example
```javascript JavaScript - Configuration
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
export default function (listener) {
  listener.use(
    ExcelExtractor({
      rawNumbers: true,
      skipEmptyLines: true,
      chunkSize: 50000,
      debug: true,
    })
  );
}
```
```typescript TypeScript - Configuration
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
import type { FlatfileListener } from '@flatfile/listener';
export default function (listener: FlatfileListener) {
  listener.use(
    ExcelExtractor({
      rawNumbers: true,
      skipEmptyLines: true,
      chunkSize: 50000,
      debug: true,
    })
  );
}
```
### Advanced Header Detection
```javascript JavaScript - Specific Rows
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
export default function (listener) {
  listener.use(
    ExcelExtractor({
      headerDetectionOptions: {
        algorithm: 'specificRows',
        rowNumbers: [2], // 0-based index for the third row
      },
    })
  );
}
```
```typescript TypeScript - Specific Rows
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
import type { FlatfileListener } from '@flatfile/listener';
export default function (listener: FlatfileListener) {
  listener.use(
    ExcelExtractor({
      headerDetectionOptions: {
        algorithm: 'specificRows',
        rowNumbers: [2], // 0-based index for the third row
      },
    })
  );
}
```
### Merged Cell Handling
```javascript JavaScript - Merged Cells
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
export default function (listener) {
  listener.use(
    ExcelExtractor({
      mergedCellOptions: {
        acrossColumns: {
          treatment: 'concatenate',
          separator: ' ',
        },
        acrossRows: {
          treatment: 'applyToAll',
        },
      },
    })
  );
}
```
```typescript TypeScript - Merged Cells
import { ExcelExtractor } from '@flatfile/plugin-xlsx-extractor';
import type { FlatfileListener } from '@flatfile/listener';
export default function (listener: FlatfileListener) {
  listener.use(
    ExcelExtractor({
      mergedCellOptions: {
        acrossColumns: {
          treatment: 'concatenate',
          separator: ' ',
        },
        acrossRows: {
          treatment: 'applyToAll',
        },
      },
    })
  );
}
```
### Direct Parser Usage
```javascript JavaScript - Direct Parser
import * as fs from 'fs';
import { excelParser } from '@flatfile/plugin-xlsx-extractor';
async function parseMyFile(filePath) {
try {
const fileBuffer = fs.readFileSync(filePath);
const workbookData = await excelParser(fileBuffer, {
skipEmptyLines: true,
});
console.log(workbookData['Sheet1'].data);
} catch (error) {
console.error('Failed to parse file:', error);
}
}
```
```typescript TypeScript - Direct Parser
import * as fs from 'fs';
import { excelParser } from '@flatfile/plugin-xlsx-extractor';
import type { WorkbookCapture } from '@flatfile/plugin-xlsx-extractor';
async function parseMyFile(filePath: string): Promise<WorkbookCapture | null> {
try {
const fileBuffer = fs.readFileSync(filePath);
const workbookData = await excelParser(fileBuffer, {
skipEmptyLines: true,
});
console.log(workbookData['Sheet1'].data);
return workbookData;
} catch (error) {
console.error('Failed to parse file:', error);
return null;
}
}
```
## Troubleshooting
### Large File Handling
```javascript JavaScript - Error Handling
import { excelParser } from '@flatfile/plugin-xlsx-extractor';
async function parseLargeFile(buffer) {
try {
const workbookData = await excelParser(buffer);
return workbookData;
} catch (error) {
if (error.message === 'plugins.extraction.fileTooLarge') {
console.error('The file is too large to be processed. Please convert it to CSV first.');
} else {
console.error('An unexpected error occurred:', error);
}
}
}
```
```typescript TypeScript - Error Handling
import { excelParser } from '@flatfile/plugin-xlsx-extractor';
import type { WorkbookCapture } from '@flatfile/plugin-xlsx-extractor';
async function parseLargeFile(buffer: Buffer): Promise<WorkbookCapture | null> {
try {
const workbookData = await excelParser(buffer);
return workbookData;
} catch (error: any) {
if (error.message === 'plugins.extraction.fileTooLarge') {
console.error('The file is too large to be processed. Please convert it to CSV first.');
} else {
console.error('An unexpected error occurred:', error);
}
return null;
}
}
```
### Debug Mode
Enable debug mode for detailed logging:
```javascript
ExcelExtractor({
debug: true
})
```
This provides detailed logs about the extraction process, including detected headers, rows processed, and configuration options being applied.
## Notes
### Default Behavior
* **Header Detection**: By default, the plugin scans the first 10 rows and selects the one with the most non-empty cells as the header row
* **Empty Rows**: Empty rows are included as empty records unless `skipEmptyLines` is set to `true`
* **Merged Cells**: Handled by the underlying library's default behavior unless custom options are provided
* **Chunk Processing**: Data is processed in batches of 10,000 records by default
### Important Considerations
* **Server-Side Only**: This plugin is designed to run in a server-side environment and should be used within a Flatfile listener
* **Duplicate Headers**: If a sheet contains duplicate column headers, the plugin automatically makes them unique by appending a suffix (e.g., 'Name', 'Name\_1', 'Name\_2')
* **Empty Headers**: Empty header cells are renamed to 'empty' (e.g., 'empty', 'empty\_1')
* **Trailing Empty Rows**: The parser automatically trims any fully empty rows from the end of a sheet before processing
* **Memory Limitations**: The plugin has built-in handling for extremely large files that can cause memory issues, throwing a user-friendly error when files are too large
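The duplicate- and empty-header handling described above can be sketched as a single normalization pass (illustrative only; the plugin's real implementation may differ in edge cases such as pre-existing `Name_1` columns):

```typescript
// Rename empty headers to 'empty' and make duplicates unique
// by appending _1, _2, ... to each repeat.
function normalizeHeaders(headers: string[]): string[] {
  const seen = new Map<string, number>();
  return headers.map((raw) => {
    const name = raw.trim() === '' ? 'empty' : raw;
    const count = seen.get(name) ?? 0;
    seen.set(name, count + 1);
    return count === 0 ? name : `${name}_${count}`;
  });
}

const unique = normalizeHeaders(['Name', 'Name', '', 'Name', '']);
```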
### Cascading Behavior
* **Row Cascading**: When `cascadeRowValues` is enabled, empty cells are filled with values from the cell above. The cascade resets on a completely blank row or a new value in the column
* **Header Cascading**: When `cascadeHeaderValues` is enabled, empty header cells are filled with values from the cell to the left. The cascade resets on a blank column or a new value
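Row cascading amounts to a fill-down pass over the sheet. A simplified sketch of the behavior described above (a completely blank row resets the carried values; the real plugin may handle additional reset conditions):

```typescript
// Fill empty cells with the value from the cell above in the same
// column; a completely blank row clears the carried values.
function cascadeRowValues(rows: string[][]): string[][] {
  let carry: string[] = [];
  return rows.map((row) => {
    const blankRow = row.every((cell) => cell === '');
    if (blankRow) {
      carry = [];
      return row;
    }
    const filled = row.map((cell, col) => (cell === '' ? carry[col] ?? '' : cell));
    carry = filled;
    return filled;
  });
}

const input = [
  ['Fruit', 'Apple'],
  ['', 'Banana'], // 'Fruit' cascades down from the row above
  ['Veg', 'Carrot'],
];
const cascaded = cascadeRowValues(input);
```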
# XML Extractor
Source: https://flatfile.com/docs/plugins/xml-extractor
A Flatfile plugin for parsing XML files, flattening nested structures and attributes, and converting them into tabular format for easy import into Flatfile Sheets.
The XML Extractor plugin processes .xml files uploaded to Flatfile by parsing XML data, flattening its nested structure and attributes, and converting it into a tabular format. The plugin automatically detects the main collection of records within the XML file and transforms each record into a row, making it ideal for importing structured XML data from legacy systems, APIs, or data exports.
## Installation
Install the XML Extractor plugin using npm:
```bash
npm install @flatfile/plugin-xml-extractor
```
## Configuration & Parameters
The XML Extractor accepts the following configuration options:
| Parameter | Type | Default | Description |
| ----------------- | -------- | ----------- | --------------------------------------------------------------------- |
| `separator` | string | `"/"` | Character used to separate keys when flattening nested XML elements |
| `attributePrefix` | string | `"#"` | Prefix added to keys derived from XML element attributes |
| `transform` | function | `undefined` | Function to transform each parsed data row before passing to Flatfile |
| `chunkSize` | number | `10000` | Number of records to process in each batch |
| `parallel` | number | `1` | Number of chunks to process concurrently |
| `debug` | boolean | `false` | Enable verbose logging for debugging |
### Default Behavior
By default, the plugin:
* Flattens nested elements using `"/"` as a separator (e.g., `<country><name>USA</name></country>` becomes `"country/name"`)
* Prefixes attributes with `"#"` (e.g., `<country code="US">` becomes `"country#code"`)
* Processes files in chunks of 10,000 records with sequential processing
* Automatically infers headers from all unique keys found across records
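The flattening behavior described above can be sketched as a small recursive function. This is an illustration of the documented key scheme, not the plugin's implementation; the representation of attributes as `#`-prefixed keys on the parsed object is an assumption.

```javascript
// Sketch of the documented flattening: nested element names are joined with
// `separator`; attribute keys (assumed here to be stored with the
// `attributePrefix` already applied, e.g. '#code') attach directly to their
// parent element name, yielding keys like 'country#code'.
function flatten(obj, separator = '/', attributePrefix = '#', prefix = '') {
  const out = {};
  for (const [key, value] of Object.entries(obj)) {
    const joined = key.startsWith(attributePrefix)
      ? `${prefix}${key}` // 'country' + '#code' -> 'country#code'
      : prefix
        ? `${prefix}${separator}${key}`
        : key;
    if (value !== null && typeof value === 'object' && !Array.isArray(value)) {
      Object.assign(out, flatten(value, separator, attributePrefix, joined));
    } else {
      out[joined] = value;
    }
  }
  return out;
}

// <country code="US"><name>USA</name></country> might flatten to:
console.log(flatten({ country: { '#code': 'US', name: 'USA' } }));
// { 'country#code': 'US', 'country/name': 'USA' }
```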
## Usage Examples
### Basic Usage
```javascript JavaScript
import { listener } from '@flatfile/listener';
import { XMLExtractor } from '@flatfile/plugin-xml-extractor';
export default listener.on('file:created', (event) => {
// Only handle .xml files
if (event.context.file.ext !== 'xml') {
return;
}
return XMLExtractor()(event);
});
```
```typescript TypeScript
import { listener } from '@flatfile/listener';
import { XMLExtractor } from '@flatfile/plugin-xml-extractor';
import type { FlatfileEvent } from '@flatfile/listener';
export default listener.on('file:created', (event: FlatfileEvent) => {
// Only handle .xml files
if (event.context.file.ext !== 'xml') {
return;
}
return XMLExtractor()(event);
});
```
### Custom Configuration
```javascript JavaScript
import { listener } from '@flatfile/listener';
import { XMLExtractor } from '@flatfile/plugin-xml-extractor';
export default listener.on('file:created', (event) => {
if (event.context.file.ext !== 'xml') {
return;
}
return XMLExtractor({
separator: '_',
attributePrefix: '@',
transform: (row) => {
if (row.age) {
row.age = parseInt(row.age, 10);
}
return row;
},
chunkSize: 5000,
parallel: 2,
})(event);
});
```
```typescript TypeScript
import { listener } from '@flatfile/listener';
import { XMLExtractor } from '@flatfile/plugin-xml-extractor';
import type { FlatfileEvent } from '@flatfile/listener';
export default listener.on('file:created', (event: FlatfileEvent) => {
if (event.context.file.ext !== 'xml') {
return;
}
return XMLExtractor({
separator: '_',
attributePrefix: '@',
transform: (row: Record<string, any>) => {
if (row.age) {
row.age = parseInt(row.age, 10);
}
return row;
},
chunkSize: 5000,
parallel: 2,
})(event);
});
```
### Using the Standalone Parser
```javascript JavaScript
import * as fs from 'fs';
import { xmlParser } from '@flatfile/plugin-xml-extractor';
const xmlBuffer = fs.readFileSync('path/to/your/file.xml');
const workbookData = xmlParser(xmlBuffer, { separator: '.' });
console.log(workbookData.Sheet1.headers);
console.log(workbookData.Sheet1.data[0]);
```
```typescript TypeScript
import * as fs from 'fs';
import { xmlParser } from '@flatfile/plugin-xml-extractor';
const xmlBuffer: Buffer = fs.readFileSync('path/to/your/file.xml');
const workbookData = xmlParser(xmlBuffer, { separator: '.' });
console.log(workbookData.Sheet1.headers);
console.log(workbookData.Sheet1.data[0]);
```
### Error Handling
```javascript JavaScript
import { xmlParser } from '@flatfile/plugin-xml-extractor';
try {
const emptyXml = Buffer.from('');
xmlParser(emptyXml);
} catch (e) {
console.error(e.message); // "No root xml object found"
}
```
```typescript TypeScript
import { xmlParser } from '@flatfile/plugin-xml-extractor';
try {
const emptyXml: Buffer = Buffer.from('');
xmlParser(emptyXml);
} catch (e: any) {
console.error(e.message); // "No root xml object found"
}
```
## Troubleshooting
### Fields Not Appearing as Expected
Check the `separator` and `attributePrefix` configuration. The default flattening behavior might not match your desired output format.
### Handling Repeated Tags
If tags within a record are repeated (e.g., multiple `<email>` tags), the extractor creates indexed headers. The first email tag becomes the `email` field, the second becomes `email/0`, the third `email/1`, and so on.
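The indexed-header scheme described above can be sketched as follows (illustrative only; not the plugin's source):

```javascript
// First occurrence keeps the bare key; occurrence n+2 becomes `${key}/${n}`,
// matching the documented scheme: email, email/0, email/1, ...
function headersForRepeated(key, count) {
  const headers = [key];
  for (let i = 0; i < count - 1; i++) {
    headers.push(`${key}/${i}`);
  }
  return headers;
}

console.log(headersForRepeated('email', 3)); // [ 'email', 'email/0', 'email/1' ]
```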
### Debugging Row Data
Use the `transform` function to inspect intermediate parsed output:
```javascript JavaScript
XMLExtractor({
transform: (row) => {
console.log(row);
return row;
}
})
```
```typescript TypeScript
XMLExtractor({
transform: (row: Record<string, any>) => {
console.log(row);
return row;
}
})
```
## Notes
* The plugin is designed for server-side environments and should be used in Flatfile listeners deployed on the Platform
* XML files must have a single root element containing an array of similar child elements (records)
* Not suitable for arbitrary or document-style XML with highly irregular structures
* Headers are automatically inferred from all unique keys found across all records
* The primary error condition is invalid XML structure - if no root element containing records is found, the parser throws "No root xml object found"
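Header inference as described in the notes (the union of all unique keys across records) can be sketched like this; this is an illustration, not the plugin's implementation:

```javascript
// Collect the union of keys across all flattened records, preserving the
// order of first appearance.
function inferHeaders(rows) {
  const headers = [];
  const seen = new Set();
  for (const row of rows) {
    for (const key of Object.keys(row)) {
      if (!seen.has(key)) {
        seen.add(key);
        headers.push(key);
      }
    }
  }
  return headers;
}

console.log(inferHeaders([{ name: 'Ada' }, { name: 'Grace', email: 'g@x.io' }]));
// [ 'name', 'email' ]
```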
# YAML Schema to Flatfile Blueprint Converter
Source: https://flatfile.com/docs/plugins/yaml-schema
Automates the creation of a Flatfile Blueprint (Workbook and Sheets) from one or more YAML schema definitions, streamlining the setup of a Flatfile Space by converting existing data models into a Flatfile-compatible format.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-convert-yaml-schema
```
## Configuration & Parameters
### ModelToSheetConfig
Configuration object for each Sheet to be created from a YAML schema:
| Parameter | Type | Required | Description |
| --------------------- | ------ | -------- | ------------------------------------------------------------------------------------------------ |
| `sourceUrl` | string | Yes | The URL where the YAML schema file is hosted |
| `name` | string | No | A custom name for the generated Sheet. If not provided, the `title` from the YAML schema is used |
| Additional properties | - | No | Other `Flatfile.SheetConfig` properties (e.g., `actions`) can be included to customize the Sheet |
### Options
Optional configuration object for the Workbook and plugin behavior:
| Parameter | Type | Default | Description |
| ---------------- | --------------------- | ------- | -------------------------------------------------------------------------------------- |
| `workbookConfig` | PartialWorkbookConfig | - | Configuration for the generated Workbook (name, actions, etc.) |
| `debug` | boolean | `false` | When set to `true`, prints the generated Blueprint configuration object to the console |
### PartialWorkbookConfig
Configuration for the parent Workbook:
| Parameter | Type | Description |
| --------------------- | ------ | -------------------------------------------------------------------------------- |
| `name` | string | Custom name for the Workbook. Defaults to "YAML Schema Workbook" if not provided |
| `actions` | array | Custom actions for the Workbook |
| Additional properties | - | Other properties from `Flatfile.CreateWorkbookConfig` |
### Callback Function
Optional function executed after Space and Workbook configuration:
```typescript
(event: FlatfileEvent, workbookIds: string[], tick: TickFunction) => any | Promise<any>
```
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener'
import { configureSpaceWithYamlSchema } from '@flatfile/plugin-convert-yaml-schema'
export default function (listener) {
listener.use(
configureSpaceWithYamlSchema([
{ sourceUrl: 'http://example.com/schemas/my-schema.yml' },
])
)
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener'
import { configureSpaceWithYamlSchema } from '@flatfile/plugin-convert-yaml-schema'
export default function (listener: FlatfileListener) {
listener.use(
configureSpaceWithYamlSchema([
{ sourceUrl: 'http://example.com/schemas/my-schema.yml' },
])
)
}
```
### Advanced Configuration
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener'
import { configureSpaceWithYamlSchema } from '@flatfile/plugin-convert-yaml-schema'
export default function (listener) {
listener.use(
configureSpaceWithYamlSchema(
[
{
sourceUrl: 'https://example.com/schemas/person.yml',
name: 'Custom Person Sheet',
},
{
sourceUrl: 'https://example.com/schemas/product.yml',
},
],
{
workbookConfig: {
name: 'My Custom Workbook',
actions: [
{
operation: 'submitActionFg',
mode: 'foreground',
label: 'Submit all data',
primary: true,
},
],
},
debug: true,
}
)
)
}
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener'
import {
configureSpaceWithYamlSchema,
ModelToSheetConfig,
PartialWorkbookConfig,
} from '@flatfile/plugin-convert-yaml-schema'
export default function (listener: FlatfileListener) {
listener.use(
configureSpaceWithYamlSchema(
[
{
sourceUrl: 'https://example.com/schemas/person.yml',
name: 'Custom Person Sheet',
},
{
sourceUrl: 'https://example.com/schemas/product.yml',
},
] as ModelToSheetConfig[],
{
workbookConfig: {
name: 'My Custom Workbook',
actions: [
{
operation: 'submitActionFg',
mode: 'foreground',
label: 'Submit all data',
primary: true,
},
],
} as PartialWorkbookConfig,
debug: true,
}
)
)
}
```
### Using Callback Function
```javascript JavaScript
import api from '@flatfile/api'
import { FlatfileListener } from '@flatfile/listener'
import { configureSpaceWithYamlSchema } from '@flatfile/plugin-convert-yaml-schema'
export default function (listener) {
const callback = async (event, workbookIds, tick) => {
const { spaceId } = event.context
await tick(50, 'Workbook configured, creating welcome document...')
await api.documents.create(spaceId, {
title: 'Getting Started',
body: 'Welcome! Your data templates are ready.',
})
await tick(100, 'Document created')
}
listener.use(
configureSpaceWithYamlSchema(
[{ sourceUrl: 'https://example.com/schemas/person.yml' }],
{ debug: true },
callback
)
)
}
```
```typescript TypeScript
import api from '@flatfile/api'
import { FlatfileEvent, FlatfileListener } from '@flatfile/listener'
import { TickFunction } from '@flatfile/plugin-job-handler'
import { configureSpaceWithYamlSchema } from '@flatfile/plugin-convert-yaml-schema'
export default function (listener: FlatfileListener) {
const callback = async (
event: FlatfileEvent,
workbookIds: string[],
tick: TickFunction
) => {
const { spaceId } = event.context
await tick(50, 'Workbook configured, creating welcome document...')
await api.documents.create(spaceId, {
title: 'Getting Started',
body: 'Welcome! Your data templates are ready.',
})
await tick(100, 'Document created')
}
listener.use(
configureSpaceWithYamlSchema(
[{ sourceUrl: 'https://example.com/schemas/person.yml' }],
{ debug: true },
callback
)
)
}
```
## Troubleshooting
### Debug Mode
The most effective troubleshooting tool is the `debug: true` option. When enabled, the complete generated Blueprint object is printed to the console where the listener is running. This allows you to inspect the exact Workbook, Sheet, and Field configuration that the plugin produced.
```javascript
listener.use(
configureSpaceWithYamlSchema(
[{ sourceUrl: 'https://example.com/schemas/person.yml' }],
{ debug: true }
)
)
```
### Common Error Scenarios
**Network/URL Issues**: If the plugin fails to fetch a schema from a `sourceUrl` (e.g., due to a network error, 404 Not Found, or invalid URL), it will throw an error with a message like:
```
Error: Error fetching external reference: API returned status 404: Not Found
```
**YAML Parsing Errors**: Errors during YAML parsing will bubble up and cause the configuration to fail. Ensure your YAML schema files are valid and properly formatted.
## Notes
### Requirements and Limitations
* This plugin must be used in a server-side listener environment
* It is designed to run on the `space:configure` event
* The YAML schema must be publicly accessible via the provided `sourceUrl`
* The plugin internally converts YAML to JSON and uses `@flatfile/plugin-convert-json-schema`, so supported YAML schema features are limited to what the underlying JSON Schema converter supports
### Default Behavior
* If no `name` is provided for a Sheet, the `title` from the YAML schema is used
* If no `name` is provided for the Workbook, it defaults to "YAML Schema Workbook"
* The `debug` option defaults to `false`
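The naming fallbacks listed above can be sketched as follows. The object shapes here are assumptions for illustration, not the plugin's source:

```javascript
// Sheet name falls back to the schema's `title`; workbook name falls back to
// "YAML Schema Workbook" when no custom name is configured.
function resolveSheetName(config, schema) {
  return config.name ?? schema.title;
}

function resolveWorkbookName(options = {}) {
  return options.workbookConfig?.name ?? 'YAML Schema Workbook';
}

console.log(resolveSheetName({}, { title: 'Person' })); // 'Person'
console.log(resolveWorkbookName()); // 'YAML Schema Workbook'
console.log(resolveWorkbookName({ workbookConfig: { name: 'Custom' } })); // 'Custom'
```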
### Error Handling
The plugin's external fetching logic includes error handling for network failures and HTTP errors. When errors occur, they are caught by the Flatfile listener runner and reported in the corresponding Job's execution history.
# Zip Extractor
Source: https://flatfile.com/docs/plugins/zip-extractor
Automatically decompress .zip files uploaded to a Flatfile Space and extract their contents as individual files
The Zip Extractor plugin for Flatfile is designed to automatically decompress `.zip` files uploaded to a Flatfile Space. When a user uploads a zip archive, this plugin intercepts the file, extracts its contents, and uploads each individual file back into the same Space.
This is particularly useful for workflows where users need to bulk-upload multiple files (like CSVs, Excel files, or JSON files) at once. After this plugin runs, other extractor plugins (e.g., `@flatfile/plugin-xlsx-extractor`, `@flatfile/plugin-csv-extractor`) can then process the individual extracted files. The plugin intelligently ignores common system files and directories found in zip archives, such as those starting with a dot (`.`) or `__MACOSX`. It should be deployed in a server-side listener.
## Installation
Install the plugin using npm:
```bash
npm install @flatfile/plugin-zip-extractor
```
## Configuration & Parameters
The ZipExtractor plugin accepts an optional configuration object with the following parameter:
| Parameter | Type | Default | Description |
| --------- | --------- | ------- | ----------------------------------------------------------------------------------------------------------------------------------------------------- |
| `debug` | `boolean` | `false` | Enables verbose logging for troubleshooting. When set to `true`, it prints helpful information such as file paths of extracted files being processed. |
### Default Behavior
By default, the plugin runs silently without printing extra diagnostic messages to the console. The plugin automatically filters out system files and directories, including files whose names start with a `.` (e.g., `.DS_Store`) and files within the `__MACOSX` directory.
## Usage Examples
### Basic Usage
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
const listener = new FlatfileListener();
// Register the Zip Extractor with default options
listener.use(ZipExtractor());
// When a .zip file is uploaded to Flatfile, the plugin will automatically
// extract its contents and upload them back to the Space.
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
const listener = new FlatfileListener();
// Register the Zip Extractor with default options
listener.use(ZipExtractor());
// When a .zip file is uploaded to Flatfile, the plugin will automatically
// extract its contents and upload them back to the Space.
```
### With Debug Enabled
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
const listener = new FlatfileListener();
// Register the Zip Extractor with debugging enabled
listener.use(ZipExtractor({ debug: true }));
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
const listener = new FlatfileListener();
// Register the Zip Extractor with debugging enabled
listener.use(ZipExtractor({ debug: true }));
```
### Combined with Other Extractors
```javascript JavaScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
import { XlsxExtractor } from '@flatfile/plugin-xlsx-extractor';
const listener = new FlatfileListener();
// First, use the Zip Extractor to decompress any uploaded .zip files.
listener.use(ZipExtractor());
// Then, use the XLSX Extractor to process any .xlsx files,
// including those that were just extracted from a zip archive.
listener.use(XlsxExtractor());
```
```typescript TypeScript
import { FlatfileListener } from '@flatfile/listener';
import { ZipExtractor } from '@flatfile/plugin-zip-extractor';
import { XlsxExtractor } from '@flatfile/plugin-xlsx-extractor';
const listener = new FlatfileListener();
// First, use the Zip Extractor to decompress any uploaded .zip files.
listener.use(ZipExtractor());
// Then, use the XLSX Extractor to process any .xlsx files,
// including those that were just extracted from a zip archive.
listener.use(XlsxExtractor());
```
## Troubleshooting
The primary tool for troubleshooting is the `debug` option. Setting `debug: true` during initialization will cause the plugin to output detailed logs, including the paths of files it is extracting and uploading, which can help diagnose issues with the extraction process.
If an error occurs during the unzipping or file upload process, the plugin will catch the exception, log it, and fail the associated Flatfile Job with the message "Extraction failed" followed by the original error message. This provides clear feedback in the Flatfile UI.
## Notes
### Server-Side Operation
This plugin is intended to be run in a server-side listener environment, not in the browser.
### Event Handling
The plugin operates on the `file:created` event in Flatfile and specifically targets files with a `.zip` extension, ignoring all other file types.
### File Filtering
The plugin automatically filters out certain files from the archive to avoid clutter:
* Files whose names start with a `.` (e.g., `.DS_Store`)
* Files within the `__MACOSX` directory (common in archives created on macOS)
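The filtering rules above can be sketched as a simple predicate. This is an illustration of the documented behavior; the plugin's exact matching logic may differ:

```javascript
// Skip dotfiles (e.g. .DS_Store) and anything under a __MACOSX directory.
function shouldExtract(entryPath) {
  const segments = entryPath.split('/');
  const name = segments[segments.length - 1];
  return !name.startsWith('.') && !segments.includes('__MACOSX');
}

console.log(shouldExtract('data/contacts.csv'));       // true
console.log(shouldExtract('data/.DS_Store'));          // false
console.log(shouldExtract('__MACOSX/._contacts.csv')); // false
```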
### Job Tracking
The plugin utilizes `@flatfile/plugin-job-handler` to provide feedback on the extraction progress within the Flatfile UI.
### Deprecated Export
The `zipExtractorPlugin` export is deprecated and kept for backward compatibility. Use `ZipExtractor` in all new implementations.
# CLI Reference
Source: https://flatfile.com/docs/reference/cli
Command line interface for developing, deploying, and managing Flatfile Agents
The Flatfile Command Line Interface (CLI) provides tools to develop, deploy, and manage [Listeners](/core-concepts/listeners) in your Flatfile environment.
Once listeners are deployed and hosted on Flatfile's secure cloud, they are
called Agents.
## Installation
```bash
npx flatfile@latest
```
## Configuration
### Authentication
The CLI requires your Flatfile API key and Environment ID, provided either in Environment variables (ideally in a `.env` file) or as command flags. You can find your API key and Environment ID in your Flatfile dashboard under "[API Keys and Secrets](https://platform.flatfile.com/dashboard/keys-and-secrets)".
**Recommended approach:** Use a `.env` file in your project root for secure,
convenient, and consistent authentication. If you're using Git, make sure to
add `.env` to your `.gitignore` file.
**Using `.env` file**
Create a `.env` file in your project root:
```bash
# .env file
FLATFILE_API_KEY="your_api_key_here"
FLATFILE_ENVIRONMENT_ID=your_environment_id_here
```
This approach keeps credentials out of your command history and makes it easy to switch between environments.
**Using command flags**
For one-off commands or CI/CD environments:
```bash
npx flatfile develop --token YOUR_API_KEY --env YOUR_ENV_ID
```
### Regional Servers
For improved performance and compliance, Flatfile supports regional deployments:
| Region | API URL |
| ------ | ---------------------------- |
| US | platform.flatfile.com/api |
| UK | platform.uk.flatfile.com/api |
| EU | platform.eu.flatfile.com/api |
| AU | platform.au.flatfile.com/api |
| CA | platform.ca.flatfile.com/api |
Set your regional URL in `.env`:
```bash
FLATFILE_API_URL=platform.eu.flatfile.com/api
```
Contact support to enable regional server deployment for your account.
## Development Workflow
Use `develop` to run your listener locally with live reloading
Use `deploy` to push your listener to Flatfile's cloud as an Agent
Use `agents` commands to list, download, or delete deployed agents
Use separate environments for development and production to avoid conflicts.
The CLI will warn you when working in an environment with existing agents.
## Commands
### develop
Run your listener locally with automatic file watching and live reloading.
```bash
npx flatfile develop [file-path]
```
**Options**
| Option | Description |
| ------------- | ---------------------------------------------------- |
| `[file-path]` | Path to listener file (auto-detects if not provided) |
| `--token` | Flatfile API key |
| `--env` | Environment ID |
**Features**
* Live reloading on file changes
* Real-time HTTP request logging
* Low-latency event streaming (10-50ms)
* Event handler visibility
**Example output**
```bash
> npx flatfile develop
✔ 1 environment(s) found for these credentials
✔ Environment "development" selected
ncc: Version 0.36.1
ncc: Compiling file index.js into CJS
✓ 427ms GET 200 https://platform.flatfile.com/api/v1/subscription 12345
File change detected. 🚀
✓ Connected to event stream for scope us_env_1234
▶ commit:created 10:13:05.159 AM us_evt_1234
↳ on(**, {})
↳ on(commit:created, {"sheetSlug":"contacts"})
```
***
### deploy
Deploy your listener as a Flatfile Agent.
```bash
npx flatfile deploy [file-path] [options]
```
**Options**
| Option | Description |
| -------------- | ---------------------------------------------------- |
| `[file-path]` | Path to listener file (auto-detects if not provided) |
| `--slug`, `-s` | Unique identifier for the agent |
| `--ci` | Disable interactive prompts for CI/CD |
| `--token` | Flatfile API key |
| `--env` | Environment ID |
**File detection order**
1. `./index.js`
2. `./index.ts`
3. `./src/index.js`
4. `./src/index.ts`
**Examples**
```bash
# Basic deployment
npx flatfile deploy
# Deploy with custom slug
npx flatfile deploy --slug my-agent
# CI/CD deployment
npx flatfile deploy ./src/listener.ts --ci
```
**Multiple agents**
Deploy multiple agents to the same environment using unique slugs:
```bash
npx flatfile deploy --slug agent-one
npx flatfile deploy --slug agent-two
```
Without a slug, the CLI updates your existing agent or creates one with slug
`default`.
***
### agents list
Display all deployed agents in your environment.
```bash
npx flatfile agents list
```
Shows each agent's:
* Agent ID
* Slug
* Deployment status
* Last activity
***
### agents download
Download a deployed agent's source code.
```bash
npx flatfile agents download <agent-slug>
```
**Use cases**
* Examine deployed code
* Modify existing agents
* Back up source code
* Debug deployment issues
Use `agents list` to find the agent slug you need.
***
### agents delete
Remove a deployed agent.
```bash
npx flatfile agents delete <agent-slug>
```
**Options**
| Option | Description |
| ------------------ | ---------------------------- |
| `--agentId`, `-ag` | Use agent ID instead of slug |
***
## Related Resources
* [Listeners](/core-concepts/listeners) - Core concept documentation
* [Events](/reference/events) - Event system reference
# Event Reference
Source: https://flatfile.com/docs/reference/events
Complete reference for all Flatfile events, their payloads, and when they are triggered
Flatfile emits events throughout the data import lifecycle, allowing your applications to respond to user actions, system changes, and processing results. This reference documents all available events, their payloads, and when they are triggered.
## Event Structure
All Flatfile events follow a consistent structure. Optional fields may be included depending on the event's domain ([workbook-level](#workbook-events) events, for instance, won't have a `sheetId`).
```typescript
interface FlatfileEvent {
id: string;
topic: string;
domain: string;
context: {
environmentId: string;
spaceId?: string;
workbookId?: string;
sheetId?: string;
jobId?: string;
fileId?: string;
[key: string]: any;
};
payload: any;
attributes?: any;
createdAt: string;
}
```
## Listening and Reacting to Events
To respond to these events, you'll need to create a [Listener](/core-concepts/listeners) that subscribes to the specific events your application needs to handle.
## Job Events
Job events are triggered when background tasks and operations change state.
### job:created
Triggered when a new job is first created. Some jobs will enter an optional
planning state at this time. A job with `immediate` set to `true` skips the
planning step and transitions directly to `ready`.
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
### job:ready
Triggered when a job is ready for execution by your listener. Either the job
has a complete plan of work or the job is configured to not need a plan. This
is the only event most job implementations will care about. Once a ready job
is acknowledged by a listener, it transitions into an executing state.
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
```typescript
listener.filter({ job: "*" }, (configure) => {
configure.on("job:ready", async (event) => {
const { jobId } = event.context
// Handle any job that becomes ready
await processJob(jobId)
})
})
```
### job:scheduled
Triggered when a job is scheduled to run at a future time
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status (scheduled)
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
### job:updated
Triggered when a job is updated. For example, when a listener updates the
state or progress of the job. The event will emit many times as the listener
incrementally completes work and updates the job.
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
### job:completed
Triggered when a job has completed successfully
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
### job:outcome-acknowledged
Triggered after the user has acknowledged that the job has completed or
failed, allowing follow-up workflow actions to run. Background jobs will skip this step.
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
workbookId?: string
sheetId?: string
jobId: string
actorId: string
}
```
### job:parts-completed
Triggered when all parts of a multi-part job have completed processing
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status (parts-completed)
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
parts: Array<{ // completed parts information
partId: string
status: string
completedAt: string
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
### job:failed
Triggered when a job fails
```typescript
{
domain: string // event domain (e.g., "space", "workbook")
operation: string // operation name (e.g., "configure")
job: string // domain:operation format (e.g., "space:configure")
status: string // job status
info?: string // optional info message
isPart: boolean // whether this is a sub-job
input?: any // job input data
}
```
```typescript
{
accountId: string
environmentId: string
spaceId?: string
workbookId?: string
sheetId?: string
jobId: string
actorId: string
}
```
### job:deleted
Triggered when a job is deleted
```typescript
{
accountId: string
environmentId: string
spaceId?: string
jobId: string
actorId: string
}
```
## Program Events
Program events are triggered when mapping programs and transformations change state.
### program:created
Triggered when a new mapping program is created
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### program:updated
Triggered when a mapping program is updated
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### program:recomputing
Triggered when a mapping program begins recomputing its transformations
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### program:recomputed
Triggered when a mapping program has finished recomputing its transformations
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
## File Events
File events are triggered when files are uploaded, processed, or modified.
### file:created
Triggered when a file upload begins or a new file is created
```typescript
{
accountId: string
environmentId: string
spaceId: string
fileId: string
actorId: string
}
```
### file:updated
Triggered when a file is updated. For example, when a file has been extracted
into a workbook
```typescript
{
accountId: string
environmentId: string
spaceId: string
fileId: string
actorId: string
}
```
```typescript
{
status: string
workbookId?: string
}
```
### file:deleted
Triggered when a file is deleted
```typescript
{
accountId: string
environmentId: string
spaceId: string
fileId: string
actorId: string
}
```
### file:expired
Triggered when a file is expired
```typescript
{
accountId: string
environmentId: string
spaceId: string
fileId: string
}
```
## Record Events
Record events are triggered when data records are created, updated, or deleted.
### records:created
Triggered when new records are added to a sheet
```typescript
{
sheetId: string
recordIds: string[]
recordCount: number
}
```
```typescript
{
environmentId: string
spaceId: string
workbookId: string
sheetId: string
}
```
### records:updated
**A note on `commit:created` vs `records:updated`**
You might think you want to listen to `records:updated` for data processing, but **[`commit:created`](#commit-events) is the recommended choice for most automation**.
A **commit** is like a "git commit" but for data - a versioned snapshot of all changes that occurred together in a single operation. This provides complete transaction context and better performance when processing multiple record changes.
Flatfile's own Plugins ([Record Hooks](/plugins/record-hook), [Constraints](/plugins/constraints), [Autocast](/plugins/autocast), etc.) all use `commit:created`. Reserve `records:updated` only for real-time per-record feedback scenarios.
Triggered when existing records are modified
```typescript
{
sheetId: string
recordIds: string[]
changes: Array<{
recordId: string
fieldKey: string
previousValue: any
newValue: any
}>
}
```
### records:deleted
Triggered when records are deleted from a sheet
```typescript
{
sheetId: string
recordIds: string[]
recordCount: number
}
```
## Sheet Events
Sheet events are triggered when sheets are created, modified, or when sheet-level operations occur.
### sheet:calculation-updated
Triggered when sheet calculations or formulas are updated
```typescript
{
sheetId: string
workbookId: string
calculations: Array<{
field: string
formula: string
updated: boolean
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### sheet:counts-updated
Triggered when record counts for a sheet are updated
```typescript
{
sheetId: string
workbookId: string
counts: {
total: number
valid: number
error: number
updated: number
}
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### sheet:created
Triggered when a new sheet is created
```typescript
{
sheetId: string
workbookId: string
name: string
slug: string
fieldCount: number
}
```
### sheet:updated
Triggered when a sheet Blueprint or configuration is modified
```typescript
{
sheetId: string
workbookId: string
changes: Array<{
type: "field_added" | "field_removed" | "field_updated" | "config_updated"
details: any
}>
}
```
### sheet:deleted
Triggered when a sheet is deleted
```typescript
{
sheetId: string
workbookId: string
name: string
slug: string
}
```
## Workbook Events
Workbook events are triggered for workbook-level operations and changes.
### workbook:created
Triggered when a new workbook is created
```typescript
{
workbookId: string
spaceId: string
name: string
namespace?: string
sheetCount: number
}
```
### workbook:updated
Triggered when a workbook is modified
```typescript
{
workbookId: string
spaceId: string
changes: Array<{
type: "sheet_added" | "sheet_removed" | "config_updated"
details: any
}>
}
```
### workbook:deleted
Triggered when a workbook is deleted
```typescript
{
workbookId: string
spaceId: string
}
```
### workbook:expired
Triggered when a workbook expires
```typescript
{
workbookId: string
spaceId: string
}
```
## Space Events
Space (Project) events are triggered for project lifecycle changes.
### space:created
Triggered when a new project (space) is created
```typescript
{
spaceId: string
environmentId: string
name: string
appId?: string
}
```
### space:updated
Triggered when a project is modified
```typescript
{
spaceId: string
environmentId: string
changes: Array<{
field: string
previousValue: any
newValue: any
}>
}
```
### space:deleted
Triggered when a space is deleted
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
actorId: string
}
```
### space:expired
Triggered when a space is expired
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
}
```
### space:archived
Triggered when a space is archived
```typescript
{
accountId: string
environmentId: string
spaceId: string
}
```
### space:guestAdded
Triggered when a guest is added
```typescript
{
actorId: string
accountId: string
environmentId: string
spaceId: string
}
```
### space:guestRemoved
Triggered when a guest's access is revoked from a space
```typescript
{
actorId: string
accountId: string
environmentId: string
spaceId: string
}
```
### space:unarchived
Triggered when a space is unarchived and restored to active status
```typescript
{}
```
```typescript
{
actorId: string
accountId: string
environmentId: string
spaceId: string
}
```
## Environment Events
Environment events are triggered for organization-level changes.
### environment:autobuild-created
Triggered when an autobuild configuration is created for an environment
```typescript
{}
```
```typescript
{
environmentId: string
accountId: string
appId?: string
actorId: string
}
```
### environment:created
Triggered when a new environment is created
```typescript
{
environmentId: string
name: string
slug: string
isProd: boolean
}
```
### environment:updated
Triggered when an environment is modified
```typescript
{
environmentId: string
changes: Array<{
field: string
previousValue: any
newValue: any
}>
}
```
### environment:deleted
Triggered when an environment is deleted
```typescript
{
environmentId: string
deletedAt: string
deletedBy: string
}
```
## Action Events
Action events are triggered when custom actions are created, updated, or deleted.
### action:created
Triggered when a new custom action is created
```typescript
{
actionId: string
name: string
label: string
description?: string
type: string
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId?: string
sheetId?: string
actorId: string
}
```
### action:updated
Triggered when a custom action is updated
```typescript
{
actionId: string
name: string
label: string
description?: string
type: string
changes: Array<{
field: string
previousValue: any
newValue: any
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId?: string
sheetId?: string
actorId: string
}
```
### action:deleted
Triggered when a custom action is deleted
```typescript
{
actionId: string
name: string
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId?: string
sheetId?: string
actorId: string
}
```
## Document Events
Document events are triggered when documents are created, updated, or deleted within workbooks.
### document:created
Triggered when a document is created on a workbook
```typescript
{
actorId: string
spaceId: string
accountId: string
documentId: string
environmentId: string
}
```
### document:updated
Triggered when a document is updated on a workbook
```typescript
{
actorId: string
spaceId: string
accountId: string
documentId: string
environmentId: string
}
```
### document:deleted
Triggered when a document is deleted on a workbook
```typescript
{
actorId: string
spaceId: string
accountId: string
documentId: string
environmentId: string
}
```
## Commit Events
Commit events are triggered when data changes are made to records.
### commit:created
Triggered when a cell in a record is created or updated
```typescript
{
sheetId: string
workbookId: string
versionId: string
sheetSlug: string
}
```
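Since the payload carries only identifiers, a handler typically uses these context fields to look up the changed records. A minimal sketch of working with this context shape (the type alias and function name are illustrative, not SDK exports):

```typescript
// Illustrative alias for the commit:created context documented above.
type CommitCreatedContext = {
  sheetId: string;
  workbookId: string;
  versionId: string;
  sheetSlug: string;
};

// Example handler body: identify which sheet and version the commit touched.
function describeCommit(ctx: CommitCreatedContext): string {
  return `commit ${ctx.versionId} on sheet "${ctx.sheetSlug}" (${ctx.sheetId})`;
}
```

In a listener, this would be invoked from the event callback, e.g. `listener.on("commit:created", (event) => describeCommit(event.context))`.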
### commit:updated
Triggered when commit metadata or details are updated
```typescript
{
sheetId: string
workbookId: string
commitId: string
versionId: string
changes: Array<{
field: string
previousValue: any
newValue: any
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
workbookId: string
sheetId: string
actorId: string
}
```
### commit:completed
Triggered when a commit has completed (only when trackChanges is enabled)
```typescript
{
sheetId: string
workbookId: string
versionId: string
commitId: string
}
```
### layer:created
Triggered when a new layer is created within a commit
```typescript
{
sheetId: string
workbookId: string
layerId: string
commitId: string
}
```
## Snapshot Events
Snapshot events are triggered when snapshots of sheet data are created.
### snapshot:created
Triggered when a snapshot is created of a sheet
```typescript
{
snapshotId: string
sheetId: string
workbookId: string
spaceId: string
}
```
## Agent Events
Agent events are triggered when agents are created, updated, or deleted.
### agent:created
Triggered when a new agent is deployed
```typescript
{
agentId: string
environmentId: string
}
```
### agent:updated
Triggered when an agent is updated
```typescript
{
agentId: string
environmentId: string
}
```
### agent:deleted
Triggered when an agent is deleted
```typescript
{
agentId: string
environmentId: string
}
```
## Secret Events
Secret events are triggered when secrets are managed.
### secret:created
Triggered when a new secret is created
```typescript
{
secretId: string
spaceId: string
environmentId: string
}
```
### secret:updated
Triggered when a secret is updated
```typescript
{
secretId: string
spaceId: string
environmentId: string
}
```
### secret:deleted
Triggered when a secret is deleted
```typescript
{
secretId: string
spaceId: string
environmentId: string
}
```
## Data Clip Events
Data clip events are triggered when data clips are managed.
### data-clip:collaborator-updated
Triggered when collaborators are added or removed from a data clip
```typescript
{
dataClipId: string
collaborators: string[]
changes: Array<{
action: "added" | "removed"
userId: string
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
dataClipId: string
}
```
### data-clip:created
Triggered when a new data clip is created
```typescript
{
dataClipId: string
accountId: string
status: string
}
```
### data-clip:updated
Triggered when a data clip's details are updated
```typescript
{
dataClipId: string
accountId: string
status: string
}
```
### data-clip:deleted
Triggered when a data clip is deleted
```typescript
{
dataClipId: string
accountId: string
status: string
}
```
### data-clip:resolutions-created
Triggered when new conflict resolutions are created for a data clip
```typescript
{
dataClipId: string
resolutions: Array<{
conflictId: string
resolution: any
resolvedBy: string
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
dataClipId: string
}
```
### data-clip:resolutions-refreshed
Triggered when conflict resolutions are refreshed or recalculated for a data clip
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
dataClipId: string
}
```
### data-clip:resolutions-updated
Triggered when existing conflict resolutions are updated for a data clip
```typescript
{
dataClipId: string
resolutions: Array<{
conflictId: string
resolution: any
resolvedBy: string
updatedAt: string
}>
changes: Array<{
conflictId: string
previousResolution: any
newResolution: any
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
dataClipId: string
}
```
## Canvas Events
Canvas events are triggered when canvases are created, updated, or deleted.
### canvas:created
Triggered when a new canvas is created
```typescript
{
// Full canvas object with all properties
}
```
```typescript
{
canvasId: string
spaceId: string
environmentId: string
accountId: string
}
```
### canvas:updated
Triggered when a canvas is updated
```typescript
{
// Full canvas object with all properties
}
```
```typescript
{
canvasId: string
spaceId: string
environmentId: string
accountId: string
}
```
### canvas:deleted
Triggered when a canvas is deleted
```typescript
{
// Full canvas object with all properties
}
```
```typescript
{
canvasId: string
spaceId: string
environmentId: string
accountId: string
}
```
## Canvas Area Events
Canvas area events are triggered when canvas areas are created, updated, or deleted.
### canvas-area:created
Triggered when a new canvas area is created
```typescript
{
// Full canvas area object with all properties
}
```
```typescript
{
canvasAreaId: string
canvasId: string
}
```
### canvas-area:updated
Triggered when a canvas area is updated
```typescript
{
// Full canvas area object with all properties
}
```
```typescript
{
canvasAreaId: string
canvasId: string
}
```
### canvas-area:deleted
Triggered when a canvas area is deleted
```typescript
{
// Full canvas area object with all properties
}
```
```typescript
{
canvasAreaId: string
canvasId: string
}
```
## Thread Events
Thread events are triggered when AI conversation threads are created, updated, or deleted.
### thread:created
Triggered when a new AI conversation thread is created
```typescript
{
threadId: string
title?: string
status: string
createdAt: string
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
threadId: string
actorId: string
}
```
### thread:updated
Triggered when an AI conversation thread is updated
```typescript
{
threadId: string
title?: string
status: string
changes: Array<{
field: string
previousValue: any
newValue: any
}>
}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
threadId: string
actorId: string
}
```
### thread:deleted
Triggered when an AI conversation thread is deleted
```typescript
{}
```
```typescript
{
accountId: string
environmentId: string
spaceId: string
threadId: string
actorId: string
}
```
## Cron Events
**Deployed Agents Required**
Cron events are only created for environments that have deployed agents subscribed to the specific cron topics. These events will not fire in localhost development environments unless you have deployed agents running in that environment.
Cron events are system events triggered at scheduled intervals for automated processes.
### cron:5-minutes
Triggered every 5 minutes for system maintenance and periodic tasks
```typescript
{}
```
```typescript
{
environmentId: string
}
```
### cron:hourly
Triggered every hour for scheduled maintenance and cleanup tasks
```typescript
{}
```
```typescript
{
environmentId: string
}
```
### cron:daily
Triggered once daily for daily maintenance and reporting tasks
```typescript
{}
```
```typescript
{
environmentId: string
}
```
### cron:weekly
Triggered once weekly for weekly cleanup and archival tasks
```typescript
{}
```
```typescript
{
environmentId: string
}
```
# Flatfile Query Language
Source: https://flatfile.com/docs/reference/ffql
Learn how to filter data in Sheets with FFQL.
FFQL (Flatfile Query Language) is Flatfile's custom query language used for filtering data in [Sheets](/core-concepts/blueprints). It's logical, composable, and human-readable.
For example, to find records where the first name is "Bender":
```
first_name eq Bender
```
## Syntax
The basic syntax of FFQL is:
```
[field name] [operator] [value]
```
### Field Name
`field name` is optional; omitting it searches across all fields. For example, `eq "Planet Express"` will match that value in any field.
`field name` can be the field key or the field label.
Labels or values with spaces should be wrapped in quotes. For example: `name eq "Bender Rodriguez"`, or `"First Name" eq Bender`.
### Operators
FFQL operators are:
* `eq` - Equals (exact match)
* `ne` - Not Equal
* `lt` - Less Than
* `gt` - Greater Than
* `lte` - Less Than or Equal To
* `gte` - Greater Than or Equal To
* `like` - (Case Sensitive) Like
* `ilike` - (Case Insensitive) Like
* `contains` - Contains (for columns of type `string-list` and `enum-list`)
Both `like` and `ilike` support the following wildcards:
* `%` - Matches any number of characters
* `_` - Matches a single character
So, for instance, `like "Bender%"` would match "Bender Rodriguez" and "Bender Bending Rodriguez".
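To illustrate these wildcard semantics, here is a hypothetical helper (not part of any Flatfile SDK) that translates a `like`/`ilike` pattern into a JavaScript RegExp, with `ilike` modeled as the case-insensitive flag:

```typescript
// Translate an FFQL like-style pattern into a RegExp:
// % matches any number of characters, _ matches exactly one.
function likeToRegExp(pattern: string, caseInsensitive = false): RegExp {
  // Escape regex metacharacters, then swap in the FFQL wildcards.
  const escaped = pattern.replace(/[.*+?^${}()|[\]\\]/g, "\\$&");
  const translated = escaped.replace(/%/g, ".*").replace(/_/g, ".");
  return new RegExp(`^${translated}$`, caseInsensitive ? "i" : "");
}
```

For example, `likeToRegExp("Bender%")` matches "Bender Rodriguez", while `likeToRegExp("bender%")` does not, mirroring the case-sensitive behavior of `like`.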
### Logical Operators
FFQL supports two logical operators:
* `and` - Returns rows meeting both conditions
* `or` - Returns rows meeting either condition (inclusive)
### The `is` Operator
You can query for message statuses by using the `is` operator. For example: `is error` returns all the rows in an `error` state. `is valid` returns all the rows in a `valid` state. `first_name is error` returns the rows where First Name is in an `error` state.
### Escaping Quotes and Backslashes
When you need to include quotes or backslashes within quoted values, you can escape them using a backslash (`\`). Here are examples of how escaping works:
| Query | Value |
| ------------------ | --------------- |
| `"hello \" world"` | `hello " world` |
| `'hello \' world'` | `hello ' world` |
| `"hello world \\"` | `hello world \` |
| `"hello \ world"` | `hello \ world` |
| `"hello \\ world"` | `hello \ world` |
For example, to search for a field containing a quote:
```
first_name eq "John \"Johnny\" Doe"
```
This would match a first name value of: `John "Johnny" Doe`
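When building queries programmatically, this escaping is easy to get wrong by hand. A small hypothetical helper (the name is illustrative) that applies the rules from the table above:

```typescript
// Wrap a value in double quotes for FFQL, escaping backslashes first
// and then embedded double quotes, per the escaping table above.
function quoteFfqlValue(value: string): string {
  return '"' + value.replace(/\\/g, "\\\\").replace(/"/g, '\\"') + '"';
}
```

For example, `quoteFfqlValue('John "Johnny" Doe')` produces the quoted fragment used in the query above.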
***
## Constructing Queries
Complex queries are possible using a combination of operators:
```
(
email like "@gmail.com"
and (
"Subscription Status" eq "On-Hold"
or "Subscription Status" eq "Pending"
)
and login_attempts gte 5
)
or is warning
```
This query would return all the rows that:
1. Have a Gmail email address,
2. Have a Subscription Status of "On-Hold" or "Pending",
3. And have 5 or more login attempts.
It will also include any rows that have a "Warning" message status.
***
## Usage
### Via search bar
From the search bar in a Workbook, prepend **filter:** to your FFQL query.
**type in search bar**
```
filter: first_name eq Bender and last_name eq Rodriguez
```
### Via API
FFQL queries can be passed to any [REST API](https://reference.flatfile.com/overview/welcome) endpoint that supports the `q` parameter.
Here's an example **cURL** request using the `sheets/{sheetId}/records` endpoint:
**Shell / cURL**
```bash
curl --location 'https://platform.flatfile.com/api/sheets/us_sh_12345678/records?q="Subscription Status" eq "On-Hold" or "Subscription Status" eq "Pending"' \
--header 'Accept: */*' \
--header 'Authorization: Bearer YOUR_TOKEN'
```
Make sure to [encode](https://en.wikipedia.org/wiki/URL_encoding) percent characters if you use them.
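Since FFQL queries routinely contain spaces and quotes, it is safest to URL-encode the entire `q` value before assembling the request URL. A sketch in TypeScript, reusing the placeholder sheet ID from the cURL example above:

```typescript
const query =
  '"Subscription Status" eq "On-Hold" or "Subscription Status" eq "Pending"';

// encodeURIComponent handles spaces, quotes, and percent characters for us.
const url =
  "https://platform.flatfile.com/api/sheets/us_sh_12345678/records?q=" +
  encodeURIComponent(query);
```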
# For LLMs
Source: https://flatfile.com/docs/reference/for-llms
AI-optimized documentation formats and tools for better LLM integration
The Flatfile documentation is optimized for use with Large Language Models (LLMs) and AI tools. We provide several features to help you get faster, more accurate responses when using our documentation as context.
## Contextual Menu
We provide a contextual menu on every documentation page with quick access to AI-optimized content and direct integrations with popular AI tools:
* **Copy page** - Copies the current page as Markdown for pasting as context into AI tools
* **View as Markdown** - Opens the current page in a clean Markdown format
* **Open in ChatGPT** - Creates a ChatGPT conversation with the current page as context
* **Open in Claude** - Creates a Claude conversation with the current page as context
Access this menu by right-clicking on any page or using the contextual menu button (when available).
## LLM-Optimized File Formats
### /llms.txt
The `/llms.txt` file follows the [industry standard](https://llmstxt.org) that helps general-purpose LLMs index documentation more efficiently, similar to how a sitemap helps search engines.
**What it contains:**
* Complete list of all available pages in our documentation
* Page titles and URLs for easy navigation
* Structured format that AI tools can quickly parse
**How to use:**
* Access at: [https://flatfile.com/docs/llms.txt](https://flatfile.com/docs/llms.txt)
* Reference this file when asking AI tools to understand our documentation structure
* Use it to help LLMs find relevant content for specific topics
### /llms-full.txt
The `/llms-full.txt` file combines our entire documentation site into a single file, optimized for use as comprehensive context in AI tools.
**What it contains:**
* Full text content of all documentation pages
* Structured with clear page boundaries and headers
* Optimized formatting for LLM consumption
**How to use:**
* Access at: [https://flatfile.com/docs/llms-full.txt](https://flatfile.com/docs/llms-full.txt)
* Download and use as context for comprehensive questions about Flatfile
* Ideal for complex queries that might span multiple documentation areas
**Note:** This file is large and may consume significant tokens. You might want to use `/llms.txt` first to identify specific pages, then use individual page URLs or the contextual menu for targeted questions.
## Markdown Versions of Pages
All documentation pages are available in Markdown format, which provides structured text that AI tools can process more efficiently than HTML.
### .md Extension
Add `.md` to any page's URL to view the Markdown version:
```
https://flatfile.com/docs/getting-started/welcome.md
https://flatfile.com/docs/embedding/overview.md
https://flatfile.com/docs/core-concepts/workbooks.md
```
## Best Practices for AI Tool Usage
### For Specific Questions
1. **Start with targeted pages** - Use the contextual menu or `.md` extension for specific topics
2. **Reference multiple related pages** - Copy 2-3 relevant pages for comprehensive context
3. **Include the page URL** - Help the AI tool understand the source and context
### For Comprehensive Questions
1. **Use `/llms.txt` first** - Help the AI understand our documentation structure
2. **Follow up with specific pages** - Use targeted content based on the structure overview
3. **Consider `/llms-full.txt`** - For complex questions spanning multiple areas (token usage permitting)
### Example Prompts
**For specific integration help:**
```
I'm trying to embed Flatfile in my React app. Here's the relevant documentation:
[paste content from /embedding/react.md]
How do I configure it for my use case where...
```
**For comprehensive understanding:**
```
I need to understand Flatfile's architecture. Here's their documentation structure:
[paste content from /llms.txt]
Can you explain how Environments, Apps, and Spaces work together?
```
## Technical Implementation
These AI-optimization features are built into our documentation platform and automatically maintained:
* **Auto-generated** - Files are updated automatically when documentation changes
* **Optimized formatting** - Content is structured for optimal LLM processing
* **Consistent structure** - All pages follow the same format for predictable parsing
## Feedback
These AI optimization features are continuously improved based on usage patterns and feedback. If you have suggestions for better LLM integration or notice issues with the generated formats, please [join our Slack community](https://flatfile.com/join-slack/) and share your thoughts in the #docs channel.
# Icon Reference
Source: https://flatfile.com/docs/reference/icons
Reference of all available icons
Icons may be set for custom action buttons. Here's a list of all available icons that Flatfile supports.
## Usage Example
Icons can be used in [Actions](/core-concepts/actions) to provide visual cues for different operations:
```javascript
const action = {
operation: "export-data",
label: "Export",
icon: "download",
description: "Export your data",
mode: "background",
};
```
Icons help users quickly identify the purpose of different actions and improve the overall user experience in your Flatfile implementation.
## All icons
# V2 Records API
Source: https://flatfile.com/docs/reference/v2-records-api
Using the V2 Records API with the Flatfile TypeScript API package
The V2 Records API provides an interface for reading and writing records from Flatfile sheets.
It is optimized for performance when working with large numbers of records.
Support for the V2 Records API was added to the `@flatfile/api` package in version 1.18.0.
## Overview
The V2 Records API is accessible through the `records.v2` property on the Flatfile client.
```typescript
import { FlatfileClient } from "@flatfile/api";
const client = new FlatfileClient({ token: "YOUR_TOKEN" });
await client.records.v2.get("us_sh_1234abcd")
```
The V2 API uses a specialized JSON Lines (JSONL) format that is optimized for performance and
usage with streaming requests.
## Reading Records
### getRaw()
Retrieve records from a sheet in raw JSONL format.
```typescript
getRaw(
sheetId: Flatfile.SheetId,
options?: GetRecordsRequestOptions,
requestOptions?: RequestOptions
): Promise<JsonlRecord[]>
```
Returns an array of `JsonlRecord` objects containing the raw data structure from the API including special fields like `__k` (record ID), `__v` (version), etc.
**Example:**
```typescript
const rawRecords = await api.records.v2.getRaw('us_sh_123', {
fields: ['firstName', 'lastName'],
pageSize: 1000
});
rawRecords.forEach(record => {
console.log(`Record ID: ${record.__k}`);
console.log(`Field values:`, record);
});
```
### getRawStreaming()
Stream records from a sheet in raw JSONL format.
```typescript
getRawStreaming(
sheetId: Flatfile.SheetId,
options?: GetRecordsRequestOptions,
requestOptions?: RequestOptions
): AsyncGenerator<JsonlRecord>
```
This is the most memory-efficient way to process large datasets, as records are yielded individually.
**Example:**
```typescript
for await (const rawRecord of api.records.v2.getRawStreaming('us_sh_123', {
includeTimestamps: true
})) {
console.log(`Record ID: ${rawRecord.__k}`);
console.log(`Updated at: ${rawRecord.__u}`);
// Process each record as it streams in
}
```
## Writing Records
### writeRaw()
Write records to a sheet in raw JSONL format.
```typescript
writeRaw(
records: JsonlRecord[],
options?: WriteRecordsRequestOptions,
requestOptions?: RequestOptions
): Promise<WriteRecordsResponse>
```
**Parameters:**
* `records` - Array of JsonlRecord objects to write
* `options` - Write configuration options
* `requestOptions` - Optional request configuration
**Example:**
```typescript
const records: JsonlRecord[] = [
{ firstName: 'John', lastName: 'Doe', __s: 'us_sh_123' },
{ __k: 'us_rc_456', firstName: 'Jane', lastName: 'Smith' } // Update existing
];
const result = await api.records.v2.writeRaw(records, {
sheetId: 'us_sh_123',
truncate: false
});
console.log(`Created: ${result.created}, Updated: ${result.updated}`);
```
### writeRawStreaming()
Stream records to a sheet in raw JSONL format.
```typescript
writeRawStreaming(
recordsStream: AsyncIterable<JsonlRecord>,
options?: WriteStreamingOptions,
requestOptions?: RequestOptions
): Promise<WriteRecordsResponse>
```
Efficiently write large datasets by streaming records to the server.
**Example:**
```typescript
async function* generateRecords() {
for (let i = 0; i < 100000; i++) {
yield {
firstName: `User${i}`,
email: `user${i}@example.com`,
__s: 'us_sh_123'
};
}
}
const result = await api.records.v2.writeRawStreaming(generateRecords(), {
sheetId: 'us_sh_123',
chunkSize: 1000,
useBodyStreaming: 'auto'
});
```
## JSONL Record Format Reference
The V2 Records API uses JSONL (JSON Lines) format for raw record operations. JSONL is a text format where each line is a valid JSON object representing a single record.
### Special Fields
Flatfile uses special fields prefixed with `__` (double underscore) to store metadata and system information:
| Field | Description | Example |
| ------ | ----------------------------- | ------------------------------------------------------------- |
| `__k` | Record ID (unique identifier) | `"us_rc_123456"` |
| `__nk` | New record ID (for creation) | `"temp_123"` |
| `__v` | Record version ID | `"v_789"` |
| `__s` | Sheet ID | `"us_sh_123"` |
| `__n` | Sheet slug | `"contacts"` |
| `__c` | Record configuration | `{"locked": true}` |
| `__m` | Record metadata | `{"source": "import"}` |
| `__i` | Validation messages | `{"email": [{"type": "error", "message": "Invalid format"}]}` |
| `__d` | Deleted flag | `true` |
| `__e` | Valid flag | `false` |
| `__l` | Linked records | `[{...}]` |
| `__u` | Updated timestamp | `"2023-11-20T16:59:40.286Z"` |
Note that many special fields will not be included unless the corresponding query parameter is included,
such as `includeMessages` to include the `__i` field with validation messages.
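Because system metadata and user data share one flat object, a common first step when consuming raw records is to split the `__`-prefixed keys from the field values. A hypothetical helper (not part of the SDK):

```typescript
// Split a raw JSONL record into system metadata (__-prefixed keys)
// and regular field values.
function splitJsonlRecord(record: Record<string, unknown>): {
  meta: Record<string, unknown>;
  data: Record<string, unknown>;
} {
  const meta: Record<string, unknown> = {};
  const data: Record<string, unknown> = {};
  for (const [key, value] of Object.entries(record)) {
    (key.startsWith("__") ? meta : data)[key] = value;
  }
  return { meta, data };
}
```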
### Field Values
Regular field values are stored directly as properties on the JSONL record:
```json
{
"__k": "us_rc_123456",
"__s": "us_sh_123",
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com",
"age": 30,
"isActive": true
}
```
### Validation Messages
If included, the `__i` field contains validation messages for fields that have errors or warnings.
The messages are structured as an array of objects, each of the form `{"x": "<field key>", "m": "<message>"}`.
```json
{
"__k": "us_rc_123456",
"__s": "us_sh_123",
"email": "invalid-email",
"__i": [
{"x": "firstName", "m": "First name is required"},
{"x": "email", "m": "Not a valid email address"}
]
}
```
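To surface these messages per field, a consumer can filter the `__i` array on its `x` key. A small illustrative helper (the type and function names are assumptions, not SDK exports):

```typescript
type ValidationMessage = { x: string; m: string };

// Collect the validation messages attached to a single field key.
function messagesForField(
  record: { __i?: ValidationMessage[] },
  fieldKey: string
): string[] {
  return (record.__i ?? [])
    .filter((msg) => msg.x === fieldKey)
    .map((msg) => msg.m);
}
```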
### Complete Example
A complete JSONL record with all common fields:
```json
{
"__k": "us_rc_123456",
"__v": "v_789",
"__s": "us_sh_123",
"__n": "contacts",
"__c": {
"locked": false,
"priority": "high"
},
"__m": {
"source": "api_import",
"batch_id": "batch_001"
},
"__i": [
{"x": "firstName", "m": "First name is required"},
{"x": "email", "m": "Not a valid email address"}
],
"__e": true,
"__u": "2023-11-20T16:59:40.286Z",
"firstName": "John",
"lastName": "Doe",
"email": "john.doe@example.com",
"company": "Acme Corp",
"phone": "+1-555-123-4567"
}
```
### Record Operations
The V2 write records endpoint can be used to create, update, or delete records in a sheet,
all in a single request. The operation performed depends on which special keys are provided.
#### Create New Record
To create a new record, the record can simply be passed with a sheet ID in `__s` and no record ID.
```json
{
"__s": "us_sh_123",
"firstName": "New",
"lastName": "User"
}
```
#### Update Existing Record
To update an existing record, provide the record ID in the `__k` field.
```json
{
"__k": "us_rc_123456",
"__s": "us_sh_123",
"firstName": "Updated",
"lastName": "Name"
}
```
#### Delete Record
To delete a record, provide the record ID in the `__k` field and pass the delete flag
in the `__d` field.
```json
{
"__k": "us_rc_123456",
"__s": "us_sh_123",
"__d": true
}
```
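Because the operation is inferred per record, a single `writeRaw` call can mix all three. A sketch combining the payloads above (the record IDs are illustrative placeholders; the API call itself is shown commented out):

```typescript
// One batch mixing create (no __k), update (__k present), and delete (__d: true).
const operations: Array<Record<string, unknown>> = [
  { __s: "us_sh_123", firstName: "New", lastName: "User" }, // create
  { __k: "us_rc_123456", __s: "us_sh_123", firstName: "Updated" }, // update
  { __k: "us_rc_654321", __s: "us_sh_123", __d: true }, // delete
];

// const result = await api.records.v2.writeRaw(operations, { sheetId: "us_sh_123" });
```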