EyesOnIt TypeScript SDK

The EyesOnIt TypeScript SDK wraps the EyesOnIt REST API with typed request and response classes for image processing, stream monitoring, archive and live search, video processing, and face recognition.

Installation

Clone the repository, install dependencies, and build the SDK:

git clone https://github.com/EyesOnIt-AI/EyesOnIt-typescript-sdk.git
cd EyesOnIt-typescript-sdk
npm install
npm run build

If you want a distributable package for another project:

npm pack

That produces a tarball such as eyesonit-typescript-sdk-4.0.0.tgz, which you can install in another project with:

npm install ./eyesonit-typescript-sdk-4.0.0.tgz

Package Exports

Import client, input, output, and model classes from eyesonit-typescript-sdk.

Imports

import {
  EyesOnItAPI,
  EOIProcessImageInputs,
  EOIRegion,
  EOIVertex,
  EOIDetectionConfig,
  EOIObjectDescription,
} from "eyesonit-typescript-sdk";

Create A Client

EyesOnItAPI is the main client class.

const client = new EyesOnItAPI("http://localhost:8000");

Constructor parameters:

  • apiBasePath: base URL for the EyesOnIt server, such as http://localhost:8000
  • restHandler: optional custom handler implementing IEOIRESTHandler
  • customLogger: optional logger object; if omitted, the SDK uses its internal logger

If you do not supply a REST handler, the SDK uses its built-in Axios implementation.
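
The customLogger parameter can be any object the SDK can route log messages through. A minimal sketch, assuming info/warn/error style methods (the exact interface comes from the SDK's own typings, so check those before relying on this shape):

```typescript
// Hypothetical logger shape: info/warn/error method names are an assumption
// for illustration; the SDK's typings define the real interface.
const messages: string[] = [];

const customLogger = {
  info: (msg: string) => messages.push(`[info] ${msg}`),
  warn: (msg: string) => messages.push(`[warn] ${msg}`),
  error: (msg: string) => messages.push(`[error] ${msg}`),
};

// Passed as the third constructor argument, keeping the default REST handler:
// const client = new EyesOnItAPI("http://localhost:8000", undefined, customLogger);

customLogger.info("client configured");
```

Capturing messages in an array like this is also a convenient way to inspect SDK logging in tests.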

Quick Start: Process One Image

Build most configuration objects by assigning properties after construction. EOIRegion and EOIDetectionConfig are the main examples.

import fs from "node:fs";
import {
  EyesOnItAPI,
  EOIProcessImageInputs,
  EOIRegion,
  EOIVertex,
  EOIDetectionConfig,
  EOIObjectDescription,
} from "eyesonit-typescript-sdk";

async function main() {
  const client = new EyesOnItAPI("http://localhost:8000");

  const region = new EOIRegion();
  region.name = "Loading Dock";
  region.polygon = [
    new EOIVertex(0, 0),
    new EOIVertex(1280, 0),
    new EOIVertex(1280, 720),
    new EOIVertex(0, 720),
  ];

  const detectionConfig = EOIDetectionConfig.default();
  detectionConfig.class_name = "person";
  detectionConfig.class_threshold = 25;
  detectionConfig.object_descriptions = [
    new EOIObjectDescription("person wearing a safety vest", false, true, 70),
  ];

  region.detection_configs = [detectionConfig];

  const base64Image = fs.readFileSync("frame.jpg").toString("base64");
  const inputs = new EOIProcessImageInputs(base64Image, [region], true);
  const response = await client.processImage(inputs);

  if (!response.success) {
    throw new Error(response.message);
  }

  for (const detection of response.detections ?? []) {
    console.log({
      region: detection.region,
      bestDescription: detection.getMaxConfidenceDescription(),
      objectCount: detection.getDetectedObjects()?.length ?? 0,
    });
  }
}

main().catch(console.error);

Stream Monitoring Workflow

A typical stream workflow is:

  1. Create an EOIAddStreamInputs object with stream metadata, regions, and optional lines, notifications, recording, or effects.
  2. Call addStream(...).
  3. Call monitorStream(...) for the same stream URL.
  4. Poll getLastDetectionInfo(...), getVideoFrame(...), or getStreamDetails(...) as needed.
  5. Call stopMonitoringStream(...) and removeStream(...) when you are done.

Example:

import {
  EyesOnItAPI,
  EOIAddStreamInputs,
  EOIMonitorStreamInputs,
  EOIRegion,
  EOIVertex,
  EOIDetectionConfig,
} from "eyesonit-typescript-sdk";

const client = new EyesOnItAPI("http://localhost:8000");

const region = new EOIRegion();
region.name = "Warehouse Floor";
region.polygon = [
  new EOIVertex(0, 0),
  new EOIVertex(1280, 0),
  new EOIVertex(1280, 720),
  new EOIVertex(0, 720),
];
region.detection_configs = [EOIDetectionConfig.default()];

const addResponse = await client.addStream(
  new EOIAddStreamInputs(
    "rtsp://camera-01/live",
    "Warehouse Camera 01",
    5,
    true,
    ["image"],
    [region],
    undefined,
    undefined,
    undefined,
    undefined,
  ),
);

if (!addResponse.success) {
  throw new Error(addResponse.message);
}

const monitorResponse = await client.monitorStream(
  new EOIMonitorStreamInputs("rtsp://camera-01/live", null),
);

if (!monitorResponse.success) {
  throw new Error(monitorResponse.message);
}

Search Workflows

The SDK exposes two search entry points:

  • searchArchive(inputs: EOIArchiveSearchInputs)
  • searchLive(inputs: EOILiveSearchInputs)

Both inherit from EOISearchInputs, so they can search by:

  • natural language via object_description
  • face recognition via face_match_type, face_person_id, or face_group_id
  • similarity matching via similarity

searchLive additionally supports two input fields:

  • duration_seconds
  • notification

A running live search can be controlled with the client methods:

  • pauseLiveSearch(...)
  • resumeLiveSearch(...)
  • cancelLiveSearch(...)

Example live search:

import {
  EyesOnItAPI,
  EOILiveSearchInputs,
  EOIUpdateLiveSearchInputs,
} from "eyesonit-typescript-sdk";

const client = new EyesOnItAPI("http://localhost:8000");

const inputs = new EOILiveSearchInputs();
inputs.class_name = "person";
inputs.search_type = "natural_language";
inputs.object_description = "person wearing a red backpack";
inputs.alert_threshold = 80;
inputs.stream_list = ["rtsp://camera-01/live"];
inputs.duration_seconds = 300;

const liveResponse = await client.searchLive(inputs);

if (!liveResponse.success) {
  throw new Error(liveResponse.message);
}

await client.pauseLiveSearch(new EOIUpdateLiveSearchInputs(liveResponse.search_id));
await client.resumeLiveSearch(new EOIUpdateLiveSearchInputs(liveResponse.search_id));
await client.cancelLiveSearch(new EOIUpdateLiveSearchInputs(liveResponse.search_id));
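
searchArchive takes the same base search fields plus date bounds. A hedged sketch of an archive query payload, with field names drawn from the search options and validation rules on this page (treat the exact shape of EOIArchiveSearchInputs as defined by the SDK's typings, not by this sketch):

```typescript
// Field names mirror this page's search options and validation rules;
// the full EOIArchiveSearchInputs type is in the models reference.
const archiveQuery = {
  class_name: "person",
  search_type: "natural_language",
  object_description: "person wearing a red backpack",
  start_date_time: "2024-06-01T00:00:00Z", // must be on or after 2020-01-01T00:00:00Z
  end_date_time: "2024-06-02T00:00:00Z",   // must be after start_date_time
};

const datesValid =
  new Date(archiveQuery.start_date_time) < new Date(archiveQuery.end_date_time);
console.log(datesValid); // true
```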

Face Recognition Workflow

Use EyesOnItAPI to manage face recognition groups and people.

Typical flow:

  1. Create a group with addFacerecGroup(...)
  2. Create a person with addFacerecPerson(...)
  3. Attach one or more images to that person
  4. Use face recognition in stream detection config or search inputs
  5. Query with searchFacerecGroupNames(...), searchFacerecPeopleNames(...), or getFacerecPersonDetails(...)

Example:

import fs from "node:fs";
import {
  EyesOnItAPI,
  EOIAddFacerecGroupInputs,
  EOIAddFacerecPersonInputs,
} from "eyesonit-typescript-sdk";

const client = new EyesOnItAPI("http://localhost:8000");

await client.addFacerecGroup(
  new EOIAddFacerecGroupInputs(
    "employees",
    "Employees",
    "Known employees allowed in restricted areas",
  ),
);

const person = new EOIAddFacerecPersonInputs(
  "jane-doe",
  "Jane Doe",
  ["employees"],
);

person.addImageBase64(
  fs.readFileSync("jane-doe.jpg").toString("base64"),
  "jane-doe.jpg",
);

await client.addFacerecPerson(person);

Process Video Jobs

processVideo(...) accepts EOIProcessVideoInputs and returns EOIProcessVideoResponse, which exposes video_id on success. The input model supports:

  • video file paths and output paths
  • rotation
  • frame rate
  • indexing for search
  • regions and lines
  • real_time
  • recording and effects
  • video_start_local_time, start_seconds, and end_seconds
  • mode, base_image_path, plugins, and validation

Use Models and Configuration for the field-level reference.
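
As a rough illustration of those fields, a plain-object sketch follows. Only the snake_case names listed above are taken from this page; the mode, path, and timing values are invented placeholders, and the real EOIProcessVideoInputs constructor may differ:

```typescript
// Sketch only: field names come from the list above; values marked as
// assumptions are placeholders, not documented defaults.
const videoJob = {
  real_time: false,
  video_start_local_time: "2024-06-01T08:00:00", // assumption: example timestamp
  start_seconds: 0,
  end_seconds: 60,
  mode: "default",            // assumption: example mode value
  base_image_path: "frames/", // assumption: example path
  plugins: [] as string[],
  validation: true,
};

const clipLength = videoJob.end_seconds - videoJob.start_seconds;
console.log(clipLength); // 60
```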

Update Runtime Configuration

updateConfig(...) accepts an EOIUpdateConfigInputs object.

import { EyesOnItAPI, EOIUpdateConfigInputs } from "eyesonit-typescript-sdk";

const client = new EyesOnItAPI("http://localhost:8000");

const response = await client.updateConfig(
  new EOIUpdateConfigInputs({
    some_server_setting: true,
  }),
);

Validation Rules That Commonly Matter

The SDK validates requests before sending them. Common failures come from:

  • empty stream URLs
  • stream names, region names, or line names shorter than 3 characters
  • polygons with fewer than 3 vertices
  • line definitions with fewer than 2 vertices
  • unsupported class names; valid values are person, vehicle, bag, animal, and unknown
  • object-description thresholds outside the validator's accepted range of 1 to 99
  • live-search thresholds that are not strictly between 0 and 100
  • archive search date ranges before 2020-01-01T00:00:00Z or with start_date_time >= end_date_time
  • missing face recognition IDs for person or group match modes
  • similarity searches without at least one valid seed image or seed_id
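
Several of those rules can be pre-checked locally before a request is sent. A minimal sketch mirroring a subset of them (this is not the SDK's own validator, which still runs on every request):

```typescript
// Local pre-flight checks mirroring a subset of the documented rules.
const VALID_CLASS_NAMES = ["person", "vehicle", "bag", "animal", "unknown"];

interface Vertex {
  x: number;
  y: number;
}

function preflightErrors(
  regionName: string,
  polygon: Vertex[],
  className: string,
  descriptionThreshold: number,
): string[] {
  const errors: string[] = [];
  if (regionName.length < 3) errors.push("region name must be at least 3 characters");
  if (polygon.length < 3) errors.push("polygon needs at least 3 vertices");
  if (!VALID_CLASS_NAMES.includes(className)) errors.push(`unsupported class name: ${className}`);
  if (descriptionThreshold < 1 || descriptionThreshold > 99) {
    errors.push("object-description threshold must be between 1 and 99");
  }
  return errors;
}

const ok = preflightErrors(
  "Loading Dock",
  [{ x: 0, y: 0 }, { x: 1280, y: 0 }, { x: 1280, y: 720 }],
  "person",
  70,
);
console.log(ok.length); // 0

const bad = preflightErrors("LD", [{ x: 0, y: 0 }], "drone", 0);
console.log(bad.length); // 4
```

Failing fast like this keeps obviously invalid regions from ever reaching the server, but the server-side validation remains the source of truth.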