Version: v3

Managing data acceptance

After finishing this guide, you will be able to specify how and when you want to receive data.

Highlights:
  • Define your Portal options for user info and experience
  • Set up callbacks for managing data egress and validation
  • Specify how and when you want to receive data once an import is complete

New

Some of the features described below are only available in the most recent version of the Flatfile SDK. Make sure you update to the latest version.

Processing Data

When a user completes their import in Flatfile, we begin sending the validated data to you for acceptance. Data sent will have passed all validation rules configured in Flatfile and will not include data dismissed during submission. If you expect data to fail additional validation on your server, configure returning these rejected records to Flatfile using the onData callback.

Below is an example of registering these callbacks. You can customize any of the options shown in the table below the sample.

Flatfile.requestDataFromUser({
  onData(chunk, next) {
    // Do something...
    chunk.records.forEach(console.log);
    next();
  },
  onComplete() {
    alert("Thank you for importing data.");
  },
  onError(error) {},
});

Options

| Property | Type | Description |
| --- | --- | --- |
| `token` | Optional, JWT or Callback | Pass a signed JWT, or a callback that returns a Promise resolving to a JWT (e.g. fetched from your API). See Securing Data for more info. |
| `embedId` | Optional, UUID | If you're in development mode, you can pass your Embed ID without any token. |
| `user` | Optional, User | Read more below. |
| `org` | Optional, Org | Read more below. |
| `chunkSize` | number, default: `100` | The number of records to process at once. Max: 1000. |
| `chunkTimeout` | number (ms), default: `3000` | Allowed processing time per chunk before erroring. |
| `autoContinue` | boolean, default: `false` | When `true`, automatically continue the most recent in-progress import. When `false`, users are given the choice to continue or start over. |
| `customFields` | object | User-defined custom fields provided at runtime (string only). |
| `open` | `"iframe"` or `"window"`, default: `"iframe"` | Whether to open the importer in an iFrame overlay on the existing page or in a new window. |
| `mountUrl` | Optional, URL | Used only for on-prem implementations. |
| `apiUrl` | Optional, URL | Used only for on-prem implementations. |
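As a sketch of how these options fit together, the object below combines several of them; every value here (the `embedId`, user, and org details) is a hypothetical placeholder, not a working credential.

```typescript
// A hedged example of an options object that could be passed to
// Flatfile.requestDataFromUser. All values are hypothetical placeholders.
const importOptions = {
  embedId: "00000000-0000-0000-0000-000000000000", // dev mode only; no token needed
  user: { id: 42, email: "jane@example.com", name: "Jane" },
  org: { id: 7, name: "Acme Inc." },
  chunkSize: 250, // process 250 records per onData call (max 1000)
  chunkTimeout: 5000, // allow 5s of processing per chunk before erroring
  autoContinue: true, // resume the most recent in-progress import automatically
  open: "window" as const, // open the importer in a new window instead of an iFrame
};
```

In a real integration you would replace `embedId` with a signed `token` outside of development mode, as described under Securing Data.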

Types

User

| Property | Type | Description |
| --- | --- | --- |
| `id` | string or number | Unique identifier for the user |
| `email` | string | User's email address |
| `name` | string | User's name |

Org

| Property | Type | Description |
| --- | --- | --- |
| `id` | string or number | Unique identifier for the company |
| `name` | string | Company's name |

Runtime-provided Custom Fields (Beta)

This feature lets you pre-set custom fields that are not defined in the underlying template. It is helpful for user-defined fields that have already been set up in your application. You cannot define a custom field with the same field key as a field defined in the core template.

danger

Currently only string custom fields are supported. Use data hooks for more advanced validation or formatting.

{
  customFields: [
    {
      field: "foo",
      type: "string",
      label: "Custom Field Foo",
    },
    {
      field: "bar",
      type: "string",
      label: "Custom Field Bar",
    },
  ],
}

Callbacks

token

If you pass a callback function, the SDK will execute this when it needs to obtain a token (usually upon the user clicking a button in your application to start importing data).

info

Learn more about how tokens are used in the ⚔️ Securing Data documentation.

{
  token: async (): Promise<string> => {
    // customize this to match your response payload
    const res = await fetch("https://your-api/auth-flatfile");
    const data = await res.json();
    return data.jwt;
  },
}

onInit

This is invoked whenever a new import batch is initialized. There are two places this can happen:

  1. When a user initially begins an import session
  2. If the user decides to cancel or start over

If you are doing anything that references the specific batch, workspace, or workbook being used for the import, that logic must go here. In particular, this is the only place you should set environment variables, which are attached independently to each import session.

{
  onInit({ session, meta }) {
    // meta contains:
    // { batchId, workspaceId, workbookId, schemaIds, mountUrl }

    // example usage:
    session.updateEnvironment({ foo: "bar" });
  },
}

onData

When a user completes an import we'll begin streaming this data to you in chunks. Generally, you will post these records to an API on your server and confirm they're accepted. If you anticipate records being rejected by your server, you can configure passing rejected records back to Flatfile as a PartialRejection in this section.

info

See the RecordsChunk documentation below for more information on the payload.

{
  onData: async (chunk, next) => {
    const records = chunk.records;
    await doSomeApiWork(records);
    next();
  },
}

onComplete

This is called after all the data is accepted or dismissed. You can do things like display a celebratory message or route your user to the next step of your workflow here.

tip

This is only called once per import session. If you return a PartialRejection from onData, this will not be called until the user successfully re-submits the remaining data.

{
  onComplete() {
    // do celebratory things
  },
}

onError

You can build your own custom error handler to make sure users see errors in your application's preferred UI. If you do not provide anything here, Flatfile will trigger a basic alert with the error.

info

View Error Handling documentation for more information on errors.

{
  onError(err) {
    // err.userMessage - a user-friendly message
    // err.debug - an error message intended only for you as a developer
    // err.code - a short code such as "FF-UA-01"; we recommend always
    //   including it in messages to your users to expedite debugging
  },
}

RecordsChunk

When processing data, Flatfile allows you to manage how much data you want to process at once by configuring chunkSize. Each chunk inside onData contains the following properties:

| Property | Type | Description |
| --- | --- | --- |
| `recordIds` | [number, ...] | List of record IDs included in this chunk of records. |
| `records` | [FlatfileRecord, ...] | An array of records (see FlatfileRecord). |
| `totalChunks` | number | The total number of chunks to process. |
| `currentChunkIndex` | number | The index (starting at 0) of the chunk currently being processed. |
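For example, `totalChunks` and `currentChunkIndex` can drive a simple progress indicator in your `onData` callback. This sketch assumes only the chunk shape documented above, using a minimal stand-in type rather than the SDK's own class:

```typescript
// Compute import progress from the chunk metadata described above.
// The parameter type is a minimal stand-in for the SDK's RecordsChunk.
function progressPercent(chunk: {
  totalChunks: number;
  currentChunkIndex: number;
}): number {
  // currentChunkIndex starts at 0, so add 1 to count the chunk in flight
  return Math.round(((chunk.currentChunkIndex + 1) / chunk.totalChunks) * 100);
}
```

You might call this at the top of `onData` to update a progress bar before posting the chunk's records to your API.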

FlatfileRecord

| Property | Type | Description |
| --- | --- | --- |
| `data` | { key: "value" } | An object of the record's cells, e.g. `{ name: 'John', age: 22 }` |
| `recordId` | number | Unique identifier for the record, provided by Flatfile |
| `valid` | boolean | Whether the record is valid based on the record type, uniqueness, and additional validations. |
| `status` | `"review"`, `"dismissed"` or `"accepted"` | Specifies the record's status. |
| `allMessages` | [RecordMessage, ...] | An array of messages (see RecordMessage) |
| `errors` | [RecordMessage, ...] | An array of messages with "error" level (see RecordMessage) |
| `warnings` | [RecordMessage, ...] | An array of messages with "warn" level (see RecordMessage) |
| `info` | [RecordMessage, ...] | An array of messages with "info" level (see RecordMessage) |
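Inside `onData` you might split a chunk's records by their `valid` flag before posting them to your API. A minimal sketch, assuming only the record fields documented above (the `partitionByValidity` helper is hypothetical, not part of the SDK):

```typescript
// Minimal stand-in for the FlatfileRecord fields used here.
interface ImportRecord {
  recordId: number;
  valid: boolean;
  data: Record<string, string>;
}

// Hypothetical helper: partition a chunk's records so that, for example,
// only valid ones are forwarded to your API.
function partitionByValidity(records: ImportRecord[]): {
  valid: ImportRecord[];
  invalid: ImportRecord[];
} {
  return {
    valid: records.filter((r) => r.valid),
    invalid: records.filter((r) => !r.valid),
  };
}
```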

RecordMessage

| Property | Type | Description |
| --- | --- | --- |
| `level` | `"info"`, `"warn"` or `"error"` | The level of the message for the record. "error" messages automatically set `valid` to `false`. |
| `field` | string | Field's key |
| `message` | string | Text message |
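As the table notes, any "error"-level message forces a record invalid. That rule can be sketched as follows; the `RecordMessage` interface here mirrors the documented shape, and `isValidGiven` is an illustrative helper, not an SDK function:

```typescript
// Mirrors the RecordMessage shape documented above.
interface RecordMessage {
  level: "info" | "warn" | "error";
  field: string;
  message: string;
}

// Hypothetical helper: a record stays valid only while none of its
// messages are at "error" level.
function isValidGiven(messages: RecordMessage[]): boolean {
  return !messages.some((m) => m.level === "error");
}
```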

Rejecting Records

If you find additional validation errors while accepting incoming data, you can reject some records and have the user resolve them within the Flatfile importer. When configured, the importer cycles back into Review until all records are accepted or dismissed.

Flatfile Submit Workflow

import { PartialRejection, RecordError } from "@flatfile/sdk";

Flatfile.requestDataFromUser({
  onData: (chunk, next) => {
    // Do something that causes a failure...
    chunk.records.forEach(console.log);
    next(
      // A PartialRejection can be created with a single RecordError or a list of them.
      new PartialRejection(
        // A RecordError should be created with a record (or record ID)
        // and a list of validation errors.
        new RecordError(1, [
          { field: "full_name", message: "This person already exists." },
        ]),
      ),
    );
  },
});

Up Next

Securing data