react-native-incode-sdk Usage guide
Contents:
- Installation
- Additional setup for iOS
- Additional setup for Android
- Usage
- Flow execution results
- Modules configuration
- Flow listeners
- Other API methods
- Sections execution API
- Dynamic Delivery
- Using SDK without an API KEY
- Customization
Installation
npm install https://sdk-js.s3.amazonaws.com/react-native/react-native-incode-sdk-3.2.0-nvc.tgz
Additional setup for iOS
After running the installation command above, you need to link the SDK for iOS. Please make sure you have CocoaPods version 1.11 or above.
- Change your Podfile so it requires a deployment target of 11.0 or higher:
-platform :ios, '10.0'
+platform :ios, '11.0'
- Add this line to the top of your Podfile:
pod 'react-native-incode-sdk', :path => '../node_modules/react-native-incode-sdk/ios'
- Add an empty Swift file (e.g. Void.swift) to your native project through Xcode. When prompted for a bridging header, choose Create Bridging Header.
- Relying on React Native's auto-linking, run the following:
cd ios && pod install && cd ..
- Adapt Info.plist by adding:
  - NSCameraUsageDescription - The SDK uses the camera in order to verify the identity of the customer, e.g. in ID scan, Selfie scan and so on.
  - NSLocationWhenInUseUsageDescription - The SDK uses the current user location in order to detect the exact location for the Geolocation step.
  - NSMicrophoneUsageDescription - The SDK uses the microphone for performing a video call during the Video Conference step or for doing speech recognition during Video Selfie.
- Add Linker flags
Go to your project's Target, then Build Settings -> Other Linker Flags and add these flags:
$(inherited) -all_load -ObjC -l "stdc++" -l "iconv" -framework "VideoToolbox"
Additional setup for Android
In your app's build.gradle, please make sure the following settings are set:
- Change your minSdkVersion to 17 or higher.
- Add this dependency to your app's build.gradle:
dependencies {
+ implementation 'com.incode.sdk:core-light:2.0.0'
}
- Make sure your application has multidex support enabled:
defaultConfig {
...
+ multiDexEnabled true
...
}
dependencies {
...
+ implementation 'com.android.support:multidex:1.0.3'
...
}
- Your Application class should now extend MultiDexApplication:
+ import androidx.multidex.MultiDexApplication;
+ public class MyApplication extends MultiDexApplication implements ReactApplication {
...
}
- Modify your project's build.gradle so it contains the Artifactory username and password provided by Incode:
allprojects {
repositories {
...
+ jcenter()
+ maven {
+ url "http://repo.incode.com/artifactory/libs-incode-welcome"
+ credentials {
+ username = "ARTIFACTORY_USERNAME"
+ password = "ARTIFACTORY_PASSWORD"
+ }
+ }
...
}
}
Usage
The typical basic workflow includes the following steps:
- Explicitly initialize IncodeSdk
- After the SDK is initialized, configure the Incode Onboarding Flow by selecting the modules (steps) you need
- Register listeners for steps-related events (e.g. step completion, step update, step failure)
- Run the Onboarding flow
Usage Example
In order to use the Incode SDK, you need to provide the API key and URL given to you by Incode (see below).
import React from 'react';
import IncodeSdk from 'react-native-incode-sdk';
import { Text, TouchableOpacity } from 'react-native';
const StartOnboardingButton = () => {
// 1. initialization
const initializeAndRunOnboarding = () => {
IncodeSdk.initialize({
testMode: false,
apiConfig: {
key: 'YOUR API KEY HERE',
url: 'YOUR API URL HERE',
},
})
.then((_) => {
// 2. configure and start onboarding
startOnboarding();
})
.catch((e) => console.error('Incode SDK failed init', e));
};
const startOnboarding = () =>
IncodeSdk.startOnboarding({
config: [
{ module: 'IdScan' },
{ module: 'SelfieScan' },
{ module: 'FaceMatch' },
{ module: 'UserScore', mode: 'accurate' },
],
})
.then((result) => {
console.log(result);
})
.catch((error) => console.log(error));
// 3: register listeners for relevant events when the component gets mounted
React.useEffect(() => {
const unsubscribers = setupListeners(); //defined below
// return a function unregistering all your listeners (called when component is unmounted)
return () => {
unsubscribers.forEach((unsubscriber) => unsubscriber());
};
}, []);
  return (
    <TouchableOpacity onPress={initializeAndRunOnboarding}>
      <Text>Start onboarding!</Text>
    </TouchableOpacity>
  );
};
const setupListeners = () => {
// returns a callback to unregister your listener, e.g. when your screen is getting unmounted
const complete = IncodeSdk.onStepCompleted;
return [
complete({
module: 'IdScanFront',
listener: (e) => {
console.log('ID scan front:', e.result);
},
}),
complete({
module: 'IdScanBack',
listener: (e) => {
console.log('ID scan back: ', e.result);
},
}),
complete({
module: 'ProcessId',
listener: (e) => {
console.log('ProcessId result: ', e.result.extendedOcrData);
},
}),
complete({
module: 'SelfieScan',
listener: (e) => {
console.log('Selfie scan complete', e.result);
},
}),
complete({
module: 'FaceMatch',
listener: (e) => {
console.log('Face match complete', e.result);
},
}),
complete({
module: 'UserScore',
listener: (e) => {
console.log('User Score', e.result);
},
}),
];
};
Initialize the SDK
Parameters available for the IncodeSdk.initialize method:
- testMode - Set to true if running on simulators/emulators, false when running on devices. Default: false.
- sdkMode - Specify standard or captureOnly mode. Default is standard. captureOnly mode works completely offline and, for ID and Selfie scan, only captures images without doing any validations.
- disableHookCheck - Set to true if you would like to test the app on Android devices that have hooking frameworks installed, because they are not allowed in production. Default: false.
- apiConfig
  - key [optional] - API key, provided by Incode
  - url - API URL, provided by Incode
Possible error codes and error messages during initialize (an example of handling them follows the list):
- Incd::EmulatorDetected - “Emulator detected, emulators aren't supported in non test mode!”
- Incd::RootDetected - “Root access detected, rooted devices aren't supported in non test mode!”
- Incd::HookDetected - “Hooking framework detected, devices with hooking frameworks aren't supported in non test mode!”
- Incd::TestModeDetected - “Please disable test mode before deploying to a real device!”
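For illustration, a minimal sketch of an initialization that enables the offline capture-only mode and surfaces these errors. The e.code / e.message shape of the rejection is an assumption based on the Face Login example later in this guide:
IncodeSdk.initialize({
  testMode: false,
  sdkMode: 'captureOnly', // offline mode: ID and Selfie scans only capture images, no validations
  disableHookCheck: false,
  apiConfig: {
    key: 'YOUR API KEY HERE',
    url: 'YOUR API URL HERE',
  },
})
  .then(() => console.log('Incode SDK initialized'))
  .catch((e) => {
    // e.code is expected to hold one of the error codes listed above,
    // e.g. 'Incd::EmulatorDetected' when running on a simulator with testMode: false
    console.error('Incode SDK failed init', e.code, e.message);
  });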
Starting the onboarding
IncodeSdk.startOnboarding will create a new session based on the list of modules you provide via config.
In case you would like to change session parameters, specify them inside the optional sessionConfig parameter (see the example after this list):
- region - Specify a region; currently supported are ALL (covers all regions), BR (optimized for Brazil) and IN (optimized for India)
- queue - Specify a queue to which this onboarding session will be attached
- interviewId - In case you're creating an onboarding session on the backend, you can provide its interviewId and the flow will be started for that particular onboarding session
- configurationId - In case you've created a flow on the dashboard with a specific configuration, you can provide it here so that those settings apply to this session
- token - In case you're creating an onboarding session on the backend, you can provide its token and the flow will be started for that particular onboarding session
- externalId - ID used outside of Incode Omni
- customFields - Custom data that should be attached to this onboarding session
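A minimal sketch of passing sessionConfig alongside config. The parameter names come from the list above; the queue name and the customFields shape are placeholder assumptions:
IncodeSdk.startOnboarding({
  config: [{ module: 'IdScan' }, { module: 'SelfieScan' }, { module: 'FaceMatch' }],
  sessionConfig: {
    region: 'ALL',                              // covers all regions
    queue: 'default',                           // placeholder queue name
    externalId: 'my-external-id',               // ID used outside of Incode Omni
    customFields: { campaign: 'spring-promo' }, // arbitrary custom data attached to the session
  },
})
  .then((result) => console.log(result))
  .catch((error) => console.log(error));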
Supported modules
The modules supported by the SDK are listed here, and elaborated in more detail throughout the rest of this document.
- Phone
- DocumentScan
- Geolocation
- UserConsent
- Signature
- VideoSelfie - ⚠️ Android note: Available only for API level 21+.
- IdScan (does capture of both front and back ID and validates it)
- IdScanFront (does capture of only the front)
- IdScanBack (does capture of only the back)
- ProcessId (does only validation of the ID - should be called once IdScanFront and IdScanBack are completed)
- Conference
- SelfieScan
- FaceMatch
- QrScan
- Approve
- UserScore
Modules interdependencies
The order of the modules in the config array indicates the execution order. Note that:
- FaceMatch module expects IdScan and SelfieScan to have executed in order to perform the match. In other words, IdScan and SelfieScan must precede FaceMatch.
- ProcessId module expects IdScanFront and IdScanBack to have executed in order to validate the ID properly. In other words, IdScanFront and IdScanBack must precede ProcessId.
- UserScore module should come after all other modules (must be at the end).
- Approve module should come after all other modules (must be at the end).
- The UserScore and Approve modules do not depend on each other, so their order can be arbitrary. A valid ordering is sketched below.
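A minimal sketch of a config that respects these constraints, using the module names listed above:
IncodeSdk.startOnboarding({
  config: [
    { module: 'IdScan' },     // captures and validates front and back of the ID
    { module: 'SelfieScan' },
    { module: 'FaceMatch' },  // must come after IdScan and SelfieScan
    { module: 'UserScore' },  // must be at the end
    { module: 'Approve' },    // must be at the end; order relative to UserScore is arbitrary
  ],
});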
Flow execution results
When the flow completes, the promise returned by startOnboarding will resolve with the execution status:
{ "status": "success" }
In case the user cancels the process, the status will be user_cancelled.
In case the execution cannot be completed, the startOnboarding promise will be rejected. This can happen:
- if the user cancels the process
- if the user fails to enter the correct captcha 3 times (given that the Captcha module is included)
Modules configuration
Common configuration needed for each module includes:
Parameter name | Function |
---|---|
moduleName | Mandatory. The name of the module we wish to include |
enabled | Optional: Whether or not the module is needed in the flow |
Below is the list of all available module names, with additional configuration parameters (a combined example is shown after the list).
- Phone
- no additional parameters
- DocumentScan
  - showTutorial (optional, defaults to false): Show tutorial for document scan.
  - showDocumentProviderScreen (optional, defaults to true): Specify true if you would like to show the document provider screen to the user, or false if you would like to go directly to the capture.
  - type - specify the document type: addressStatement, paymentProof, medicalDoc, otherDocument1, otherDocument2 or otherDocument3.
- Geolocation
- no additional parameters
- UserConsent
  - title: string
  - content: string
- Signature
- no additional parameters
- VideoSelfie: Available only for API level 21+.
  - showTutorial (optional, defaults to false): Show tutorial for video selfie.
  - selfieScanMode (optional, defaults to selfieMatch): selfieMatch | faceMatch; Specify if you would like to do selfie comparison, or comparison with the photo from the ID.
  - selfieLivenessCheck (optional, defaults to false): Check for user liveness during video selfie.
  - showIdScan (optional, defaults to true): Ask for ID scan during video selfie.
  - showDocumentScan (optional, defaults to true): Ask for Proof of Address during video selfie.
  - showVoiceConsent (optional, defaults to true): Ask for voice consent during video selfie.
  - voiceConsentQuestionsCount (optional, defaults to 3): Choose the number of questions for the video selfie voice consent step.
- IdScan
  - showTutorial (optional, defaults to true): boolean
  - idType (optional): id or passport. If omitted, the app will display a chooser to the user.
- IdScanFront
  - showTutorial (optional, defaults to true): boolean
  - idType (optional): id or passport. If omitted, the app will display a chooser to the user.
- IdScanBack
  - showTutorial (optional, defaults to true): boolean
- ProcessId
- no additional parameters
- Conference
  - disableMicOnStart: boolean
- SelfieScan
  - showTutorial (optional, defaults to true): boolean
- FaceMatch
- no additional parameters
- Note: has to be specified after the IdScan and SelfieScan modules.
- QrScan
- no additional parameters
- Approve
  - forceApproval (optional, defaults to false): boolean - if true the user will be force-approved
- UserScore
  - mode (optional, defaults to accurate): accurate or fast. If accurate, the results will be fetched from the server, which may exhibit some latency, but will rely on server-side processing of information, which may be more reliable than on-device processing. If fast, then results based on on-device processing will be shown.
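For illustration, a sketch of a config array combining some of these per-module options. The option names come from the list above; the values are placeholders:
IncodeSdk.startOnboarding({
  config: [
    { module: 'Phone' },
    { module: 'UserConsent', title: 'Terms of Service', content: 'I agree to ...' },
    { module: 'DocumentScan', type: 'addressStatement', showTutorial: true },
    { module: 'IdScan', showTutorial: false, idType: 'id' },
    { module: 'SelfieScan', showTutorial: false },
    { module: 'FaceMatch' },
    { module: 'UserScore', mode: 'fast' },
    { module: 'Approve', forceApproval: false },
  ],
});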
Flow listeners
The SDK offers several types of listeners:
- user cancellation
- step completion, for each module individually
- step error, for each module individually
- step update, for each module individually
User cancellation
If the user cancels the onboarding, the startOnboarding method result will have status 'userCancelled'.
On Android, the user can cancel by pressing the back key. On iOS, the user cannot cancel.
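A minimal sketch of handling the cancellation status when the promise resolves (note that the Flow execution results section spells the value user_cancelled, so check which spelling your SDK version returns):
IncodeSdk.startOnboarding({ config: [{ module: 'SelfieScan' }] }).then((result) => {
  if (result.status === 'userCancelled') {
    // the user backed out of the flow; offer to restart or continue without onboarding
    console.log('Onboarding cancelled by the user');
  } else {
    console.log('Onboarding finished with status:', result.status);
  }
});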
Step completion
When a step within the configured Onboarding flow is completed, the information can be obtained by registering a listener via IncodeSdk.onStepCompleted. It takes an object with two fields: (1) the module name and (2) the listener callback.
IncodeSdk.onStepCompleted({
module: 'SelfieScan',
listener: (selfieScanResult) => {
console.log('SELFIE SCAN completed', selfieScanResult.result);
},
})
IdScanFront
- image: IncdImage object, containing pngBase64 of the captured front ID
- croppedFace: IncdImage object, containing pngBase64 of the cropped face from the front ID
- classifiedIdType: String - ID type classified by Incode, e.g. Voter Identification
- chosenIdType: String - what the user chose to capture: id or passport
- status: String. If the value is anything other than ok, you can consider that the ID scan and/or validation did not come through successfully. The Incode UI will clearly inform the user about these errors and will attempt the scan several times before responding with an error. Possible error values: errorClassification, noFacesFound, errorGlare, errorReadability, errorSharpness, errorTypeMismatch, userCancelled, unknownError, errorShadow, errorPassportClassification.
IdScanBack
- image: IncdImage object, containing pngBase64 of the captured back ID
- classifiedIdType: String - ID type classified by Incode, e.g. Voter Identification
- chosenIdType: String - what the user chose to capture: id or passport
- status: String. If the value is anything other than ok, you can consider that the ID scan and/or validation did not come through successfully. The Incode UI will clearly inform the user about these errors and will attempt the scan several times before responding with an error. Possible error values: errorClassification, noFacesFound, errorGlare, errorReadability, errorSharpness, errorTypeMismatch, userCancelled, unknownError, errorShadow, errorPassportClassification.
Notes:
- result.data.birthDate is the epoch time.
- result.status.back and result.status.front can take the value ok or several others indicating an issue with the scanning. The Incode UI will clearly inform the user about these errors and will attempt the scan several times before responding with an error. Other errors are: errorClassification, noFacesFound, errorCropQuality, errorGlare, errorReadability, errorSharpness, errorTypeMismatch, userCancelled, unknownError, shadow, errorAddress, errorPassportClassification.
ProcessId
- data: the basic IdScanOcrData object, containing the most important data
- ocrData: String, raw JSON containing the full OCR data, e.g. exteriorNumber, interiorNumber, typeOfId, documentFrontSubtype
Notes:
- result.data.birthDate is the epoch time.
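For example, a sketch of reading these fields in the ProcessId completion listener; whether birthDate is expressed in seconds or milliseconds is not specified here, so verify the unit before converting:
IncodeSdk.onStepCompleted({
  module: 'ProcessId',
  listener: (e) => {
    const { data, ocrData } = e.result;
    // birthDate is an epoch value; adjust the conversion if your payload uses seconds
    const birthDate = new Date(data.birthDate);
    console.log('Parsed birth date:', birthDate.toISOString());
    console.log('Raw OCR JSON:', ocrData);
  },
});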
Conference
This module starts a video conference call with an executive.
Example response for completion of the step:
{
"step": "Conference",
"result": {
"status": "success"
}
}
Example response for error of the step:
{
"step": "Conference",
"result": {
"status": "error"
}
}
The field result.status can have the values userCancelled, invalidSession or error.
DocumentScan
The DocumentScan module will attempt to read the address from the provided document.
Note: If the user decides to skip this step, you will get a response with blank values, instead of nil/error.
Example response for completion of the step:
{
"result": {
"address": { // parsed OCR data
"city": "Springfield",
"colony": "Springfield",
"postalCode": "555555",
"state": "Serbia",
"street": "Evergreen terrace"
},
"data": {"raw JSON OCR data"},
"type": "addressStatement",
"image": {"pngBase64": "/9j/4AAQSkZJRgABAQAAAQABAAD/4..."},
},
"step": "DocumentScan"
}
Example response for error of the step:
{
"status": "userCancelled",
"step": "DocumentScan"
}
The field status can have one of the following values: permissionsDenied, simulatorDetected or unknown.
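Because a skipped step arrives through the regular completion listener with blank values, a sketch of telling the two cases apart could look like this (the blank-value check on address.city is an assumption; adapt it to the fields you rely on):
IncodeSdk.onStepCompleted({
  module: 'DocumentScan',
  listener: (e) => {
    const { address, type, image } = e.result;
    if (!address || !address.city) {
      // the user skipped the step: fields come back blank rather than as an error
      console.log('Document scan skipped by the user');
      return;
    }
    console.log('Scanned', type, 'for city:', address.city, '- image present:', !!image);
  },
});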
Geolocation
The Geolocation step yields the current location of the user. Example response for completion of the step:
{
"result": {
"city": "Springfield",
"colony": "Springfield",
"postalCode": "555555",
"state": "Serbia",
"street": "Evergreen terrace"
},
"step": "Geolocation"
}
The field status can have one of the following values: permissionsDenied, unknownError or noLocationExtracted.
UserConsent
In case the user has given the consent:
{
"status": "success",
"step": "UserConsent"
}
If the user declines to give the consent, the onboarding flow will end with the status userCancelled.
FaceMatch
This module checks if the face from the scanned ID/passport and the face obtained from the SelfieScan module are a match.
Example response for completion of the step:
{ "result": { "status": "match",
"confidence": 1,
"existingUser": false
},
"step": "FaceMatch" }
The field result.status can take the values match or mismatch.
Example response for error of the step:
{
"status": "userCancelled",
"step": "FaceMatch"
}
The field status can have one of the values: unknownError or simulatorDetected.
Phone
This module enables the user to enter their phone number. Example response for completion of the step:
{
"result": { "phone": "+15555555555", "resultCode": "success" },
"step": "Phone"
}
The field result.resultCode can take the value success.
Example response for error of the step:
{
"status": "error",
"step": "Phone"
}
The field status can take one of the values: invalidSession, userCancelled or error.
Signature
This module enables the user to add a digital signature by signing on a canvas.
Example response for completion of the step:
{ "result": { "status": "success" }, "step": "Signature" }
Example response for error of the step:
{ "status": "error", "step": "Signature" }
The field status can take one of the following values: error or invalidSession.
SelfieScan
Example result:
{
"result": { "spoofAttempt": false, "status": "success" },
"step": "SelfieScan",
"image": {
"data": "...PNG Base 64 encoded"
}
}
Example response for error of the step:
{
"status": "success",
"step": "SelfieScan"
}
The field status can take one of the values: none, permissionsDenied, simulatorDetected or spoofDetected.
Approve
Adding this module to the flow instructs the Incode server to perform the approval of the user.
Example response for completion of the module:
{ "result": { "status": "approved", "id": "customerUUID", "customerToken": "customerToken"}}, "step": "Approve" }
- The field status can take the values approved and failed.
There is no error response for this module.
UserScore
This module displays the user score using the UI.
Example response for completion of the module:
{
"result": {
"existingUser": true,
"facialRecognitionScore": "0.0/100",
"idVerificationScore": "79.0/100",
"livenessOverallScore": "95.2/100",
"overallScore": "0.0/100",
"status": "fail"
},
"step": "UserScore"
}
This module shows the summary of the onboarding, based on all the modules you configured.
In case of an error, the field result.status can have one of the following values: warning, unknown, manual or fail.
QrScan
This module captures the QR code on the back of the ID, extracts the valuable data out of it and sends it to the server.
VideoSelfie
This module records the device's screen while the user takes a selfie, shows their ID, answers a couple of questions, and confirms that they accept the terms and conditions. The recorded video is then uploaded and stored for later usage.
Captcha
This module asks the user to enter the OTP generated for the current session. If the user repeatedly fails to enter the correct OTP, the flow will be terminated. On module completion, the result given to the listener will take the shape of:
{
"result": {
"status": "success",
"response": "ABCDEF"
}
}
Note: Captcha failure will result in rejection of the startOnboarding promise. There is no module error listener here.
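A sketch of catching that rejection, assuming Captcha is added to the config array like the other modules; the shape of the rejection value isn't specified here, so it is logged as-is:
IncodeSdk.startOnboarding({
  config: [{ module: 'Captcha' }, { module: 'SelfieScan' }],
})
  .then((result) => console.log('Flow finished', result))
  .catch((error) => {
    // repeated Captcha failures reject the promise instead of firing a module error listener
    console.log('Onboarding rejected (e.g. Captcha failed too many times)', error);
  });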
Other API methods
Incode's SDK exposes additional operations: Face Login, user approval, user score and face match.
Face Login 1:1
The prerequisite for Face Login is that the user was approved during onboarding, and that you have their customerToken and customerUUID at hand. If the user was approved during onboarding on a mobile device, you should have received customerToken and customerUUID as a result of the Approve step during onboarding.
Example usage:
const runLogin = async () => {
await IncodeSdk.initialize({
testMode: false,
apiConfig: {
key: 'YOUR API KEY',
url: 'YOUR API URL',
},
});
IncodeSdk.startFaceLogin({
showTutorials: true,
customerToken: 'YOUR_CUSTOMER_TOKEN',
customerUUID: 'YOUR_CUSTOMER_UUID'
}
).then( (faceLoginResult) => {
console.log(faceLoginResult)
})
.catch((e) => {
console.error(e.code + " - " + e.message)
});
};
The Face Login response will contain a boolean entry faceMatched, indicating whether or not the selfie scan resulted in a successful match with the selfie obtained in the onboarding process.
{
"spoofAttempt": false,
"faceMatched": true
}
Get user score API
To fetch the user score programmatically in order to evaluate it, you can use IncodeSdk.getUserScore.
When the promise is fulfilled, the result will contain the same data as the result field of the corresponding User Score step.
The configuration is similar to the User Score step: this method can receive an optional config parameter map, e.g. {mode: 'accurate'}. The mode parameter is optional and can take the values accurate or fast.
If the config object or the mode is absent, accurate is the default:
IncodeSdk.getUserScore().then((score) => console.log(score)); // defaults to accurate
IncodeSdk.getUserScore({ mode: 'accurate' }).then((score) =>
console.log(score)
);
IncodeSdk.getUserScore({ mode: 'fast' }).then((score) => console.log(score));
Approve API
To programmatically initiate approval of the user, you can use IncodeSdk.approve. It takes a config object setting forceApprove (see the Approve module). When the promise is fulfilled, the result will contain the same data as the result field of the corresponding Approve step.
IncodeSdk.approve({ forceApprove: false }).then((approval) =>
console.log(approval)
);
FaceMatch API
To programmatically initiate a face match between the photo from the ID and the Selfie, you can use IncodeSdk.faceMatch. It does the matching silently in the background, as opposed to the regular Face Match module that shows UI feedback to the user.
IncodeSdk.faceMatch()
.then((faceMatchResult) => {
console.log('Face match result: ', faceMatchResult);
})
.catch((e) => {
console.error('Face match failed', e);
})
Sections execution API
The Sections execution API enables you to design a flow that executes in sections, whereas the regular onboarding flow is executed in a single go. You'd typically use this method to evaluate the results of a module (e.g. ID Scan) in order to choose whether or not other modules (e.g. Geolocation) are needed. In that case ID Scan would be executed in one section, and Geolocation would be executed in the following one, if you decide to, based on the ID Scan results. In between sections, Incode's UI goes away, and you get to update your UI and/or progress into the next section.
Here's an example:
import React, { useState } from 'react';
import IncodeSdk from 'react-native-incode-sdk';
import { Text, TouchableOpacity } from 'react-native';
const StartSegmentedOnboardingButton = () => {
const [addressCity, setAddressCity] = useState(null);
const executeSegmentedFlow = async () => {
await IncodeSdk.initialize({
testMode: false,
apiConfig: {
key: 'YOUR_API_KEY',
url: 'YOUR_URL',
},
});
const unregisterIdScanListener = IncodeSdk.onStepCompleted({
module: 'IdScan',
listener: (e) => {
setAddressCity(e.result.data.address.city);
},
});
const flow = await IncodeSdk.createOnboardingSession({});
const id = await IncodeSdk.startOnboardingSection({
config: [
{
module: 'IdScan',
},
],
});
await IncodeSdk.startOnboardingSection({
config: [{ module: 'Geolocation' }, { module: 'SelfieScan' }],
});
IncodeSdk.finishOnboardingFlow();
unregisterIdScanListener();
};
  return (
    <TouchableOpacity onPress={executeSegmentedFlow}>
      <Text>Start onboarding!</Text>
    </TouchableOpacity>
  );
};
Using SDK without an API KEY
To use the SDK without an API KEY, please provide a token via the setOnboardingSession method, and then proceed with the Sections API as in the example below:
await IncodeSdk.initialize({
testMode: false,
apiConfig: {
url: 'YOUR_URL'
},
});
IncodeSdk.setOnboardingSession({
sessionConfig: {
token: 'YOUR_TOKEN',
}
}).then((sessionResult) => {
IncodeSdk.startOnboardingSection({
config: [{ module: 'IdScan', showTutorial: false },
{ module: 'SelfieScan', showTutorial: false},],
})
.then((onboardingSectionResult) => {
IncodeSdk.faceMatch()
.then((faceMatchResult) => {
IncodeSdk.finishOnboardingFlow()
.then((finishResult) => {
console.log('Session completed successfully');
})
.catch((e) => {
console.error('Finish session error', e);
})
})
.catch((e) => {
console.error('Face match error', e);
})
})
.catch((e) => {
console.error('Start onboarding section error', e);
});
})
.catch((e) => {
console.error('Set onboarding session error', e);
});
Dynamic Delivery
To use Dynamic Delivery of the resources on the Android and iOS platforms, please follow the Dynamic Delivery Usage Guide below.
Customization
The customization is done independently for both platforms.
Android
- To change the texts, add other localizations to the app, change icons or videos, or customize the theme, check the full guide here.
iOS
- To change icons or videos, please follow the guide here.
- To change the texts or add other localizations to the app, please follow the guide here.
- To change the theme, use the setTheme method, as in the following example:
const jsonTheme = JSON.stringify({
"buttons": {
"primary": {
"states": {
"normal": {
"cornerRadius": 12,
"textColor": "#ffffff",
"backgroundColor": "#ff0000"
},
"highlighted": {
"cornerRadius": 12,
"textColor": "#ffffff",
"backgroundColor": "#5b0000"
},
"disabled": {
"cornerRadius": 12,
"textColor": "#ffffff",
"backgroundColor": "#f7f7f7"
}
}
}
},
"labels": {
"title": {"textColor": "#20263d"},
"secondaryTitle": {"textColor": "#ffffff"},
"subtitle": {"textColor": "#20263d"},
"secondarySubtitle": {"textColor": "#ffffff"},
"smallSubtitle": {"textColor": "#20263d"},
"info": {"textColor": "#636570"},
"secondaryInfo": {"textColor": "#ffffff"},
"body": {"textColor": "#20263d"},
"secondaryBody": {"textColor": "#ffffff"},
"code": {"textColor": "#20263d"}
}
})
IncodeSdk.setTheme({jsonTheme: jsonTheme})
Dynamic Delivery Usage Guide
In case you want to optimize your app's download size and reduce the SDK's size impact, you can opt to download additional SDK resources at some point during the execution of your app. Using the APIs available in the React Native SDK you'll be able to check whether the resources are already downloaded, download them if they're not, and remove them in case you no longer need the SDK features.
Check if the resources are downloaded
IncodeSdk.checkOnDemandResourcesDownloaded()
.then((result) => {
console.log('resourcesAvailable: ', result.status);
// if status is 'true' resources are available, otherwise 'false' and they need to be downloaded
})
.catch((e) => {
//
});
Download resources
Download of the resources is done using the following method:
IncodeSdk.downloadOnDemandResources()
.then((result) => {
console.log('downloadCompleted, status: ', result.status);
// if status is 'success' resources are downloaded, otherwise download has to be retried
})
.catch((e) => {
// Check for specific error values and try retrying the download
});
It downloads the resources silently in the background. To be notified about the download progress:
IncodeSdk.onResourceDownloadProgressUpdated((progress) => {
  console.log('download progress updated: ', progress); // progress value ranges from 0 to 1
});
List of error codes for the downloadOnDemandResources method (a retry sketch follows the list):
- INTERNAL_ERROR - Unknown error; please contact Incode to resolve the issue and provide the error code.
- NETWORK_ERROR - The download request failed because of a network error; retry later.
- ACCESS_DENIED - The app is unable to register the request because of insufficient permissions; retry after the user accepts the permissions.
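A minimal retry sketch around these codes; whether the error code is exposed as e.code is an assumption, so adjust it to the actual rejection shape:
const downloadWithRetry = (attemptsLeft = 3) =>
  IncodeSdk.downloadOnDemandResources()
    .then((result) => result.status)
    .catch((e) => {
      if (e.code === 'NETWORK_ERROR' && attemptsLeft > 0) {
        // transient network failure: retry a few times before giving up
        return downloadWithRetry(attemptsLeft - 1);
      }
      throw e; // INTERNAL_ERROR / ACCESS_DENIED: surface to the caller
    });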
Remove resources
Once you no longer need the Incode SDK, you can notify the system that the downloaded resources are no longer needed, and they will be removed as soon as possible.
IncodeSdk.removeOnDemandResources()
  .then((result) => {
    console.log("removeOnDemandResources result: ", result.status);
    // if status is 'success', the system is notified that resources are no longer needed and will remove them as soon as possible
  })
  .catch((e) => {
    //
  });
Additional iOS setup
To use Dynamic Delivery inside your iOS app please add this line to the top of the Podfile for your target:
pod 'react-native-incode-sdk/ODR', :path => '../node_modules/react-native-incode-sdk/ios'
After doing pod install, the Dynamic Delivery version will be used and the resources won't be embedded in the app; instead, you have to download them using the method described in the Download resources section.
Additional Android setup
You need to be using Android App Bundle format when publishing your app.
Dynamic Feature Modules are supported on devices running Android 5.0 (API level 21) or higher.
That means that on other devices, dynamic modules will be installed together with the app.
Step 1
In your module-level app/build.gradle, add the following code to the android{} closure:
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
Step 2
Make your Application class extend SplitCompatApplication:
import com.google.android.play.core.splitcompat.SplitCompatApplication;
public class BaseApplication extends SplitCompatApplication {
...
}
SplitCompatApplication simply overrides ContextWrapper.attachBaseContext() to include SplitCompat.install(Context applicationContext). If you don’t want your Application class to extend SplitCompatApplication, you can override the attachBaseContext() method manually, as follows:
@Override
protected void attachBaseContext(Context base) {
super.attachBaseContext(base);
// Emulates installation of future on demand modules using SplitCompat.
SplitCompat.install(this);
}
Step 3
Enable MultiDex
If your minSdkVersion is set to 21 or higher, MultiDex is enabled by default and you can skip this step.
If your minSdkVersion is lower than 21, follow this guide from Google to enable MultiDex:
https://developer.android.com/studio/build/multidex#mdex-gradle
Step 4
Create a new Dynamic Module in Android Studio:
- File --> New --> New Module
- Select Dynamic Feature Module, click Next
- Enter the Module name, for example: incode_core, click Next
- Set your desired Module Title, for example: Incode Core
- Set Install-time inclusion to Do not include module at install-time (on-demand only)
- Enable Fusing
- Click Finish to create the Dynamic Module.
Note: The Incode SDK uses incode_core as the default name of the on-demand module. If you provide a module name different from incode_core in this step, make sure to override the default moduleName parameter with the same value when you call the downloadOnDemandResources API, e.g.:
IncodeSdk.downloadOnDemandResources({moduleName: 'MyCustomModuleName'})
.then((result) => {
console.log('downloadCompleted, status: ', result.status);
// if status is 'success' resources are downloaded, otherwise download has to be retried
})
.catch((e) => {
// Check for specific error values and try retrying the download
});
Step 5
Wait for Android Studio to finish syncing.
After the process is complete, take some time to review the changes that have been made to the project by Android Studio:
- You have a new module named incode_core
- In your module-level app/build.gradle, the following line has been added to the android{} closure: dynamicFeatures = [':incode_core']
- In your app/res/values/strings.xml file, a string has been added: <string name="title_incode_core">Incode Core</string>
Step 6
Open incode_core/build.gradle and add the following code to the android{} closure:
compileOptions {
sourceCompatibility JavaVersion.VERSION_1_8
targetCompatibility JavaVersion.VERSION_1_8
}
Add the following dependency:
implementation 'com.incode.sdk:core-light:2.0.0' // Required core dependency
The incode_core module does not need to contain any other code.
The required core-light dependency contains the image processing libraries that are used in the Welcome SDK.
Step 7
Enable splits - optimized APKs for each user’s device configuration. If your app is using App Signing by Google Play you can skip this step because splits are enabled by default. Check https://reactnative.dev/docs/signed-apk-android#migrating-old-android-react-native-apps-to-use-app-signing-by-google-play and https://developer.android.com/studio/build/configure-apk-splits for more information.
In your module-level app/build.gradle, add the following code to the android{} closure:
splits {
abi {
reset()
enable true
universalApk false // If true, also generate a universal APK
include "armeabi-v7a", "x86", "arm64-v8a", "x86_64"
}
}
Gradle configuration is done!
Step 8 - Testing your implementation
If you now try to run your application from Android Studio, the Dynamic Module will be installed together with the app,
and the code for downloading and installing the module will never execute.
To locally simulate requesting, downloading, and installing modules from the Play Store use the bundletool. Download and install bundletool from https://github.com/google/bundletool/releases
Use the following commands to build and install an apk for local test:
./gradlew bundle
bundletool build-apks --local-testing \
  --bundle app/build/outputs/bundle/release/app-release.aab \
  --output my_app.apks
bundletool install-apks --apks my_app.apks
To actually test your implementation (downloading and installing the Dynamic Module),
you need to upload your App Bundle to Google Play.
To support on demand modules, Google Play requires that you upload your application using the Android App Bundle format so that it can handle the on demand requests from the server side.
Publishing the project on the Play Console requires some graphic assets,
so for testing purposes you can use these sample assets from the Google codelab:
https://github.com/googlecodelabs/android-dynamic-features/tree/master/graphic_assets
To be able to quickly test your application, without waiting for any approval, you can publish your application in the Internal Testing track.
For a step-by-step guide on how to publish a new application on the Google Play Store, you can follow the Play Console guide on how to upload an app.
Migration guide
Migration to 4.x
initialize method changes:
- Removed the conferenceUrl parameter, which wasn't used.
- Removed the regionCode parameter; it is now part of the OnboardingSessionConfig object that is available as an optional parameter in the startOnboarding, createOnboardingSession, and setOnboardingSession methods.
startOnboarding method changes:
- Removed the interviewId parameter, now part of the sessionConfig object that is available as an optional parameter.
- Removed the configurationId parameter, now part of the sessionConfig object that is available as an optional parameter.
- Added the optional sessionConfig parameter, an OnboardingSessionConfig object where you can specify all session parameters.
createOnboardingSession method changes:
- Removed the unused verifiers parameter.
- Added the optional sessionConfig parameter, an OnboardingSessionConfig object where you can specify all session parameters.
- Added the validationModules parameter, where you can specify the list of validation modules used for a session.
startOnboardingSection method changes:
- Removed the interviewId parameter. Please specify it either in createOnboardingSession or setOnboardingSession via the sessionConfig parameter, as sketched below.
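As an illustration, a minimal before/after sketch of the interviewId move, with the 4.x shape assumed from the notes above:
// Before 4.x: interviewId was passed directly to the section call
// IncodeSdk.startOnboardingSection({ interviewId: 'YOUR_INTERVIEW_ID', config: [{ module: 'IdScan' }] });

// With 4.x: provide it through sessionConfig when creating (or setting) the session
await IncodeSdk.createOnboardingSession({
  sessionConfig: { interviewId: 'YOUR_INTERVIEW_ID' },
});
await IncodeSdk.startOnboardingSection({ config: [{ module: 'IdScan' }] });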
Migration to 3.2.x
How ID results were provided before 3.2.x:
const setupListeners = () => {
// returns a callback to unregister your listener, e.g. when your screen is getting unmounted
const complete = IncodeSdk.onStepCompleted;
return [
complete({
module: 'IdScan',
listener: (e) => {
console.log('ID scan result:', e.result);
},
}),
];
};
How to listen for ID results in 3.2.x:
const setupListeners = () => {
// returns a callback to unregister your listener, e.g. when your screen is getting unmounted
const complete = IncodeSdk.onStepCompleted;
return [
complete({
module: 'IdScanFront',
listener: (e) => {
console.log('ID scan front result:', e.result);
},
}),
complete({
module: 'IdScanBack',
listener: (e) => {
console.log('ID scan back result: ', e.result);
},
}),
complete({
module: 'ProcessId',
listener: (e) => {
console.log('ProcessId result: ', e.result.extendedOcrData);
},
}),
];
};
iOS Podfile changes
Standard SDK:
Replace
pod 'react-native-incode-sdk', :path => '../node_modules/react-native-incode-sdk'
with:
pod 'react-native-incode-sdk', :path => '../node_modules/react-native-incode-sdk/ios'
Dynamic Delivery:
Replace
pod 'react-native-incode-sdk/ODR', :path => '../node_modules/react-native-incode-sdk'
with:
pod 'react-native-incode-sdk/ODR', :path => '../node_modules/react-native-incode-sdk/ios'
Migration to 3.1.x
If you wish to keep using the standard/non-Dynamic Delivery version of the SDK on the Android platform, please add this dependency in your app's build.gradle:
implementation 'com.incode.sdk:core-light:2.0.0'