
API Integration

Background

This document outlines how to implement an API integration with Incode for ID validation, liveness, and face recognition. The target audience is developers who will be doing the implementation.

Overview

API Integration Steps diagram

First, create a session and get a session token (step 1). This token is used in all subsequent API calls via the X-Incode-Hardware-Id header. Next, upload the front and back (if applicable) images of the ID (steps 2 and 3). These endpoints return some attributes of the ID; refer to the API documentation. After that, start the ID processing job (step 4). This asynchronous endpoint kicks off a job that runs the ID validation tests and processes the ID through Incode's ML models. Call it as soon as the ID images are uploaded, because the job can take up to a few seconds to run. Next, upload the selfie (step 5) and perform the face recognition (step 6). Then wait until post-processing is finished (step 7); this step waits for the processing started in step 4 to complete. There are two ways to do this: poll the fetch onboarding status endpoint until onboardingStatus equals ONBOARDING_FINISHED, or implement webhooks and listen for the ONBOARDING_FINISHED event. Once that is done, finish the session (step 8). This marks the session as completed in the dashboard; it is optional and should only be called if the last step in your flow is not configurable in the dashboard (see note below). Finally, get the scores and OCR data (steps 9 and 10).

NOTE: It is important to configure Mark session completed on status in the flow configuration to the last step in your flow.
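
A rough sketch of steps 1 and 2 in TypeScript (Node 18+) is shown below. Only /omni/start is named in this document, so the front-ID path, the API-key header name, and the payload shapes are assumptions to be checked against the API reference.

const BASE_URL = "https://demo-api.incodesmile.com";
const API_KEY = process.env.INCODE_API_KEY!; // keep server-side only

// Step 1: create a session and receive the session token.
const startRes = await fetch(`${BASE_URL}/omni/start`, {
  method: "POST",
  headers: { "Content-Type": "application/json", "x-api-key": API_KEY }, // header name assumed
  body: JSON.stringify({ countryCode: "ALL" }),
});
const { token } = (await startRes.json()) as { token: string };

// Steps 2+: every subsequent call carries the token in the X-Incode-Hardware-Id header.
const frontIdRes = await fetch(`${BASE_URL}/omni/add/front-id`, { // path assumed
  method: "POST",
  headers: {
    "Content-Type": "application/json",
    "X-Incode-Hardware-Id": token,
  },
  body: JSON.stringify({ base64Image: "<base64-encoded front of ID>" }),
});
console.log(await frontIdRes.json()); // classification, glare, sharpness, ...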

API Reference

The links below correspond to the API documentation for each step in the API Integration Steps diagram above.

  1. Start Onboarding

    • This API is used to create a new session (a new onboarding process) on the Incode Platform. It returns a token that references the session.
    • The parameter countryCode normally has the value "ALL". However, if you are validating a Brazilian ID, it should be "BRA".
    • There is an optional parameter configurationId that takes the ID of a Flow you have configured on the dashboard. If it is not passed, the default flow is used.
  2. Add front of ID

    • This takes in a parameter base64Image. This is the base64 encoded representation of the image.

    • Optional logic to be implemented client-side (a sketch appears after this list):

      • if (!classification) display error & repeat image capture
      • if (glare < 10) display error & repeat image capture
      • if (sharpness < 10) display error & repeat image capture
      • if (horizontalResolution < 155) display error & repeat image capture
      • else show success and proceed to take the back-side ID image
  3. Add back of ID

    • Same as above.
  4. Process ID

    • Asynchronous method that kicks off processing jobs for the ID, which run the ID validation tests and extract OCR data. This can take several seconds.
  5. Add Face

    • This takes in a parameter base64Image. This is the base64 encoded representation of the image.

    • Optional logic to be implemented client-side:

      • if (HTTP status != 200) display error & repeat image capture
      • if (confidence > 0) display error & repeat image capture
      • if (hasLenses) display error & repeat image capture
      • if (hasFaceMask) display error & repeat image capture
      • if (!isBright) display error & repeat image capture
      • else show success and proceed to Face Match
    • Error Codes:

      • 100: "Too dark"
      • 101: "Take off your glasses"
      • 4010: "Make sure you are the only one in the frame"
      • 4019: "Position your face in the center of the frame"
      • 1003: "Position your face in the center of the frame"
      • 3004: "Position your face in the center of the frame"
      • 3005: "Move closer so your face fits the frame"
      • 3006: "Photo blurry, clean the lens and hold still"
      • 3007: "Too dark, try moving elsewhere"
      • 3008: "Something went wrong, please try later"
      • 3009: "Something went wrong, please try later"
      • 3010: "Couldn't capture, try moving elsewhere"
      • default: "Poor conditions for selfie, try in another place"
  6. Process Face Recognition

    • This API runs the face match between the image on the ID and the uploaded face.
    • A confidence value of 0 means that the face image does not match the image on the ID.
    • The existingUser parameter indicates whether the user is already registered in the system.
  7. Fetch Onboarding status OR Webhooks

    • Wait until onboarding is finished (i.e., the processing started in step 4 is complete). This can be done by polling "Fetch Onboarding status" and checking that onboardingStatus equals ONBOARDING_FINISHED. Another option is to enable webhooks and wait for the ONBOARDING_FINISHED event.
  8. Finish onboarding

    • This will mark the session as completed. This should only be used for non-standard flows where session completion criteria cannot be configured in the flow on the dashboard.
  9. Fetch Id validation, liveness, and face recognition scores

    • This will return a JSON object that contains the results of all the scoring. See the link above for details.
    • The overall parameter indicates whether the overall result is a pass or a fail (a sketch of consuming these results appears after this list).
  10. Fetch OCR results

    • This will return a JSON object with the results of the OCR reading. See the link above for details.

    • A detailed structure of the available fields in the decrypted response body can be found in the Apiary documentation, but the most commonly used fields are the following:

      • name. The name of the person extracted from the ID.
      • address and addressFields. The address extracted from the ID.
      • addressFromStatement and addressFromStatementFields. The address extracted from the Proof of Address (PoA).
      • typeOfId. Used to determine what kind of ID was uploaded.
      • documentType. Used to determine what kind of proof of address was uploaded.
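
As referenced in step 2, a sketch of the optional client-side quality gate is shown below. The response field names and types are assumptions inferred from the parameter names above; the step 5 selfie checks can follow the same pattern with their own fields and error codes.

// Hypothetical shape of the add-front-of-ID response fields used by the checks.
interface FrontIdQuality {
  classification?: boolean;
  glare?: number;
  sharpness?: number;
  horizontalResolution?: number;
}

// Returns an error message when the capture should be repeated, or null on success.
function frontIdCaptureError(q: FrontIdQuality): string | null {
  if (!q.classification) return "ID could not be classified; repeat image capture.";
  if ((q.glare ?? 0) < 10) return "Glare check failed; repeat image capture.";
  if ((q.sharpness ?? 0) < 10) return "Sharpness check failed; repeat image capture.";
  if ((q.horizontalResolution ?? 0) < 155) return "Resolution too low; repeat image capture.";
  return null; // success: proceed to the back-side ID capture
}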
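
And as referenced in steps 9 and 10, once step 7 reports ONBOARDING_FINISHED the results can be fetched. This continues the sketch from the Overview (BASE_URL and token as defined there); both endpoint paths here are assumptions.

// Step 9: ID validation, liveness, and face recognition scores (path assumed).
const scoresRes = await fetch(`${BASE_URL}/omni/get/score`, {
  headers: { "X-Incode-Hardware-Id": token },
});
const scores = await scoresRes.json();
console.log("overall:", scores.overall); // pass/fail indicator

// Step 10: OCR results (path assumed).
const ocrRes = await fetch(`${BASE_URL}/omni/get/ocr-data`, {
  headers: { "X-Incode-Hardware-Id": token },
});
const ocr = await ocrRes.json();
console.log(ocr.name, ocr.address, ocr.typeOfId);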

Secure Integration with Mobile SDKs

API Integration Steps diagram

Steps 1-4

First, create an endpoint on the customer-owned backend server. This endpoint will be called from the user's device at the beginning of onboarding. When this endpoint is called, it triggers a call to Incode's API /omni/start, which creates a new session. The Incode start endpoint returns a token to the customer backend server, and that token should then be sent back to the user's device in the response. Security is enhanced because the API key lives on the customer's backend server and is never exposed publicly to users. A further security-related point is that tokens are, by default, valid for 90 days (this can be modified).
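
A minimal sketch of such a backend endpoint is shown below, assuming Express and a hypothetical /onboarding/session route; the API-key header name and request body fields are assumptions as well.

import express from "express";

const app = express();
const API_KEY = process.env.INCODE_API_KEY!; // lives only on the backend

// Called by the user's device at the beginning of onboarding.
app.post("/onboarding/session", async (_req, res) => {
  const incodeRes = await fetch("https://demo-api.incodesmile.com/omni/start", {
    method: "POST",
    headers: { "Content-Type": "application/json", "x-api-key": API_KEY }, // header name assumed
    body: JSON.stringify({ countryCode: "ALL" }),
  });
  const { token } = (await incodeRes.json()) as { token: string };
  // Return the token wrapped in a JSON object (see the Web SDK note below).
  res.json({ token });
});

app.listen(3000);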

Attention: Web SDK Users

It is important that the backend returns the token in a JSON object to use it with the SDK. The object should have the following structure:

{
  "token": "eyJhbGciOiJIUzI1NiJ..."
}

If the backend does not return the token as an object, but rather a string, then it must be wrapped in an object within the implementation code.
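
For example, assuming the hypothetical /onboarding/session route from the sketch above returned a bare string:

// Wrap a bare token string in the object shape the Web SDK expects.
const raw = await (await fetch("/onboarding/session")).text();
const session = { token: raw };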

Step 5

Initialize the Incode SDK using the token received from the steps above. It is VERY IMPORTANT to add 0/ to the end of the API URL. This endpoint allows API requests that don't contain the API key to work with just the token. For example:

  1. https://demo-api.incodesmile.com/0/
  2. https://saas-api.incodesmile.com/0/

Steps 6-11

Once the SDK is initialized with the token, onboarding can occur. Typically, an onboarding has the following steps:

  1. ID capture (front and back of ID) - steps 6 and 7
  2. Process ID (run the ID through Incode models and tests) - step 8
  3. Selfie capture (liveness) - step 9
  4. Process face (face recognition) - step 10
  5. Mark onboarding as finished - step 11

Please review the SDK-specific guides on the left for implementation details.

Step 12

Listen for completion of the session by polling fetch onboarding status until onboardingStatus equals ONBOARDING_FINISHED. Another option is to use webhooks and wait for the ONBOARDING_FINISHED event. Once the onboarding is completed, the results are ready to be consumed by the customer backend using the fetch ID validation, liveness, and face recognition scores and fetch OCR results API endpoints.
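
A polling sketch for this step is shown below; the status endpoint path is an assumption, and webhooks avoid the loop entirely.

const BASE_URL = "https://demo-api.incodesmile.com";

// Poll the onboarding status (path assumed) until processing completes.
async function waitForOnboarding(token: string): Promise<void> {
  for (;;) {
    const res = await fetch(`${BASE_URL}/omni/get/onboarding/status`, {
      headers: { "X-Incode-Hardware-Id": token },
    });
    const { onboardingStatus } = (await res.json()) as { onboardingStatus: string };
    if (onboardingStatus === "ONBOARDING_FINISHED") return;
    await new Promise((resolve) => setTimeout(resolve, 2000)); // wait before the next poll
  }
}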